Commit f190715 (parent 0dbc406) by Neko-Institute-of-Science: Add usage guide (README.md)
**checkpoint-9728-failed**: This first test used the original format from the convert tool, but it turned out that format caused broken context: the model responded as expected to the initial prompt, but the moment you asked it about anything earlier in the conversation it would say something random.

I have since restarted training with the tool's new format B, which seems to have fixed the issue. I will upload checkpoints every day until training finishes or other issues are found.

# How to test?
1. Download LLaMA-13B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-HF
2. Replace `special_tokens_map.json` and `tokenizer_config.json` with the ones from this repo.
3. Rename the LLaMA-13B-HF folder to vicuna-13b.
4. Load ooba: ```python server.py --listen --model vicuna-13b --load-in-8bit --chat --lora checkpoint-xxxx```
5. Instruct mode: select Vicuna-v1; it will load Vicuna-v0 by default.
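Steps 2 and 3 above are plain file operations. A minimal Python sketch of just that part, if you want to script it (the directory paths and the `prepare_model_dir` helper are illustrative assumptions, not part of this repo; step 1's download and step 4's server launch still happen separately):

```python
import os
import shutil

# File names from step 2; everything else here is an assumption for illustration.
CONFIG_FILES = ("special_tokens_map.json", "tokenizer_config.json")

def prepare_model_dir(lora_repo_dir, model_dir, renamed_dir="vicuna-13b"):
    """Copy the tokenizer configs from the LoRA repo checkout into the
    model folder (step 2), then rename the folder so the webui sees it
    as vicuna-13b (step 3). Returns the renamed path."""
    for name in CONFIG_FILES:
        shutil.copy(os.path.join(lora_repo_dir, name),
                    os.path.join(model_dir, name))
    target = os.path.join(os.path.dirname(model_dir), renamed_dir)
    os.rename(model_dir, target)
    return target
```

After this, the folder passed as `model_dir` no longer exists under its old name; point `--model vicuna-13b` at the renamed directory as in step 4.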

# Track Training?
https://wandb.ai/neko-science/VicUnLocked?workspace=user-neko-science