uncensored
Neko-Institute-of-Science committed
Commit ef37c45
1 Parent(s): df1adf0
Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -1,6 +1,8 @@
 ---
 datasets:
 - gozfarb/ShareGPT_Vicuna_unfiltered
+tags:
+- uncensored
 ---
 **TEST FINISHED FOR NOW. I MOVED TO 30B training.**
 
@@ -20,10 +22,8 @@ I have since restarted training with the new format B from the tool and it seems
 
 # How to test?
 1. Download LLaMA-13B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-HF
-2. Replace special_tokens_map.json and tokenizer_config.json using the ones on this repo.
-3. Rename LLaMA-13B-HF to vicuna-13b
-4. Load ooba: ```python server.py --listen --model vicuna-13b --load-in-8bit --chat --lora checkpoint-xxxx```
-5. Instruct mode: Vicuna-v1 it will load Vicuna-v0 by defualt
+3. Load ooba: ```python server.py --listen --model LLaMA-13B-HF --load-in-8bit --chat --lora checkpoint-xxxx```
+4. Instruct mode: Vicuna-v1
 
 
 # Training LOG
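
For convenience, the "How to test?" steps introduced by this commit (download LLaMA-13B-HF, then load ooba with the LoRA checkpoint) can be scripted. The sketch below is only an illustration, not part of the repo: it assumes a recent `huggingface_hub` that supports `snapshot_download(..., local_dir=...)`, that text-generation-webui's `server.py` sits in the current working directory with its usual `models/` folder, and that `checkpoint-xxxx` remains a placeholder for a real LoRA checkpoint; the `server.py` flags are taken verbatim from the README.

```python
# Illustrative sketch only (not from the repo): automates the two scriptable
# steps from "How to test?". Assumes text-generation-webui (ooba) is checked
# out in the current directory with its models/ folder, and that a recent
# huggingface_hub providing snapshot_download(..., local_dir=...) is installed.
import subprocess

from huggingface_hub import snapshot_download

# Step 1: download LLaMA-13B-HF into text-generation-webui's models/ folder.
snapshot_download(
    repo_id="Neko-Institute-of-Science/LLaMA-13B-HF",
    local_dir="models/LLaMA-13B-HF",
)

# Step 2: load ooba with the flags from the README. checkpoint-xxxx is the
# README's placeholder and must be replaced with a real LoRA checkpoint name.
subprocess.run(
    [
        "python", "server.py",
        "--listen",
        "--model", "LLaMA-13B-HF",
        "--load-in-8bit",
        "--chat",
        "--lora", "checkpoint-xxxx",
    ],
    check=True,
)
```

The last step (selecting the Vicuna-v1 instruct template) is done in the web UI once the server is up, as the README's step 4 describes.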