Branch: uncensored
Neko-Institute-of-Science committed 01e308f (1 parent: 1d30ac2)

Training Done.

Files changed (1): README.md (+7 -4)
README.md CHANGED
@@ -17,12 +17,15 @@ Also I will be uploading checkpoints almost everyday. I could train another epoc
 
 Update: Since I will not be training over 1 epoch, @Aeala is training for the full 3: https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA (but it's half ctx, if you care about that). Also, @Aeala is just about done.
 
+Update: Training finished at epoch 1. These 8 days sure felt long. I only have one A6000, lads, so there's only so much I can do. Also, RIP gozfarb; IDK what happened to him.
+
 # How to test?
 1. Download LLaMA-30B-HF if you have not: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
-2. Download the checkpoint-xxxx folder you want and put it in the loras folder.
-3. Load ooba: ```python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora checkpoint-xxxx```
-4. Select instruct and choose the Vicuna-v1.1 template.
+2. Make a folder called VicUnLocked-30b-LoRA in the loras folder.
+3. Download adapter_config.json and adapter_model.bin into VicUnLocked-30b-LoRA.
+4. Load ooba: ```python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora VicUnLocked-30b-LoRA```
+5. Select instruct and choose the Vicuna-v1.1 template.
 
 
-# Want to see it Training?
+# Training Log
 https://wandb.ai/neko-science/VicUnLocked/runs/vx8yzwi7
 
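If you would rather script steps 2 and 3 of the new instructions than download the files by hand, here is a minimal sketch using huggingface_hub. The repo id Neko-Institute-of-Science/VicUnLocked-30b-LoRA is an assumption (the repo this commit appears to live in), and the sketch assumes you run it from the root of a text-generation-webui checkout so that loras/ is the folder ooba scans.

```python
# Minimal sketch of steps 2-3: create loras/VicUnLocked-30b-LoRA and pull the
# two adapter files into it. The repo id below is an assumption; adjust it if
# the adapter lives elsewhere.
from pathlib import Path
from huggingface_hub import hf_hub_download

lora_dir = Path("loras/VicUnLocked-30b-LoRA")  # step 2: folder under ooba's loras/
lora_dir.mkdir(parents=True, exist_ok=True)

for filename in ("adapter_config.json", "adapter_model.bin"):  # step 3
    hf_hub_download(
        repo_id="Neko-Institute-of-Science/VicUnLocked-30b-LoRA",  # assumed repo id
        filename=filename,
        local_dir=lora_dir,
    )
```

With the files in place, step 4's command should pick the adapter up by folder name: ```python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora VicUnLocked-30b-LoRA```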
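If you want to sanity-check the adapter outside ooba entirely, here is a hedged sketch of the same load path using transformers + peft instead of server.py. The model id, folder path, and prompt format are carried over from the steps above, not confirmed by this repo; load_in_8bit additionally requires bitsandbytes.

```python
# Sketch: load LLaMA-30B-HF in 8-bit and apply the LoRA with peft, mirroring
# what step 4's --load-in-8bit / --lora flags do inside ooba.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-30B-HF",
    load_in_8bit=True,   # mirrors ooba's --load-in-8bit flag
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("Neko-Institute-of-Science/LLaMA-30B-HF")

# peft reads adapter_config.json and adapter_model.bin from this folder (step 3)
model = PeftModel.from_pretrained(base, "loras/VicUnLocked-30b-LoRA")

# Vicuna-v1.1-style prompt, matching the template selected in step 5
prompt = "USER: Say hello in one sentence.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0],
                       skip_special_tokens=True))
```

peft applies the LoRA weights on top of the frozen 8-bit base, so this exercises the same adapter files that steps 2-4 set up for ooba.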