Neko-Institute-of-Science committed
Commit: e79cfc9
Parent(s): 3bb8c3b

Add extra info.
README.md CHANGED
@@ -15,6 +15,8 @@ So I will only be training 1 epoch, as full context 30b takes so long to train.
 This 1 epoch will take me 8 days lol, but luckily this LoRA feels fully functional at epoch 1, as shown on my 13b one.
 
 Also, I will be uploading checkpoints almost every day. I could train another epoch if there's enough want for it.
 
+Update: Since I will not be training more than 1 epoch, @Aeala is training for the full 3: https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA (but it's half ctx, if you care about that). Also, @Aeala is just about done.
+
 # How to test?
 
 1. Download LLaMA-30B-HF if you have not: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
 
 2. Download the checkpoint-xxxx folder you want and put it in the loras folder.
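A minimal sketch of the two download steps above, using the `huggingface_hub` client. Assumptions not stated in the README: the LoRA repo id below is a placeholder for this repository's actual id, `checkpoint-xxxx` must be replaced with a real checkpoint folder name, and `loras/` refers to text-generation-webui's loras directory.

```python
# Sketch only: pull the base model and one LoRA checkpoint folder from the Hub.
# Assumptions: huggingface_hub is installed, "<this-lora-repo-id>" is a
# placeholder for the actual LoRA repo id, and "loras" is the
# text-generation-webui loras folder.
from huggingface_hub import snapshot_download

# Step 1: LLaMA-30B-HF base weights (skip if you already have them).
snapshot_download(
    repo_id="Neko-Institute-of-Science/LLaMA-30B-HF",
    local_dir="models/LLaMA-30B-HF",
)

# Step 2: one checkpoint folder from the LoRA repo, placed under loras/.
snapshot_download(
    repo_id="<this-lora-repo-id>",          # placeholder: replace with the real repo id
    allow_patterns=["checkpoint-xxxx/*"],   # replace xxxx with the checkpoint you want
    local_dir="loras",
)
```

This leaves `loras/checkpoint-xxxx/` containing the adapter files, which matches where step 2 says to put them.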