Neko-Institute-of-Science committed
Commit f92bcb1
1 Parent(s): bdc4345

Update usage instructions.


So in my tests I believe you do not have to overwrite your config files, unless you can't choose your instruct template.

Files changed (1)
  1. README.md +4 -6
README.md CHANGED
@@ -16,12 +16,10 @@ This 1 epoch will take me 8 days lol but luckily these LoRA feels fully functina
 Also I will be uploading checkpoints almost every day. I could train another epoch if there's enough want for it.
 
 # How to test?
-1. Download LLaMA-30B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
-2. Replace special_tokens_map.json and tokenizer_config.json using the ones in this repo.
-3. Rename LLaMA-30B-HF to vicuna-30b
-4. Download the checkpoint-xxxx you want and put it in the loras folder.
-5. Load ooba: ```python server.py --listen --model vicuna-30b --load-in-8bit --chat --lora checkpoint-xxxx```
-6. Instruct mode: Vicuna-v1; ooba will load Vicuna-v0 by default
+1. Download LLaMA-30B-HF if you have not already: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
+2. Download the checkpoint-xxxx folder you want and put it in the loras folder.
+3. Load ooba: ```python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora checkpoint-xxxx```
+4. Select instruct mode and choose the Vicuna-v1.1 template.
 
 
 # Want to see it Training?
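
For anyone following along outside the diff, the four new steps collapse into a few shell commands. This is a minimal sketch, not the author's exact procedure: it assumes you run from a text-generation-webui checkout, that the `models/` and `loras/` folders are where ooba looks for weights and LoRAs, and that you fetch the base model with git-lfs rather than some other downloader. `checkpoint-xxxx` is a placeholder for a real checkpoint folder name from this repo.

```
# 1. Fetch the base model into the models folder, if you have not already
#    (assumes git-lfs is installed and the models/ layout of text-generation-webui).
git lfs install
git clone https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF models/LLaMA-30B-HF

# 2. Put the checkpoint folder you want under loras/
#    (checkpoint-xxxx is a placeholder for an actual checkpoint name from this repo).

# 3. Start ooba with the base model in 8-bit and the LoRA applied.
python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora checkpoint-xxxx

# 4. In the web UI, switch to instruct mode and pick the Vicuna-v1.1 template.
```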