uncensored
Neko-Institute-of-Science committed on
Commit
df1adf0
1 Parent(s): bce0efd

Update status

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
```diff
@@ -2,6 +2,8 @@
 datasets:
 - gozfarb/ShareGPT_Vicuna_unfiltered
 ---
+**TEST FINISHED FOR NOW. I MOVED TO 30B training.**
+
 # Convert tools
 https://github.com/practicaldreamer/vicuna_to_alpaca
 
@@ -10,11 +12,11 @@ https://github.com/oobabooga/text-generation-webui
 
 ATM I'm using v4.3 of the dataset and training full context.
 
-This LoRA is already pretty functional, but far from finished training. ETA from the start: 200 hours.
+This LoRA already feels fully functional.
 To use this LoRA, replace the config files with Vicuna's (I will have them here). Other than that, use normal LLaMA, replace the config files, then load the LoRA.
 
 **checkpoint-9728-failed**: This first test used the original format from the convert tool, but it was later found that this caused broken context: it would work as expected from the initial prompt, but the moment you asked it a question about anything in the past, it would say something random.
-I have since restarted training with the new format B from the tool and it seems to have fixed the issue with the original format. I will be uploading checkpoints every day until it's finished or other issues are found.
+I have since restarted training with the new format B from the tool and it seems to have fixed the issue with the original format.
 
 # How to test?
 1. Download LLaMA-13B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-HF
@@ -24,5 +26,5 @@ I have since restarted training with the new format B from the tool and it seems
 5. Instruct mode: Vicuna-v1 (it will load Vicuna-v0 by default)
 
 
-# Track Training?
-https://wandb.ai/neko-science/VicUnLocked?workspace=user-neko-science
+# Training LOG
+https://wandb.ai/neko-science/VicUnLocked/runs/cas6am7s
```
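For context on the "Instruct mode: Vicuna-v1" step, the prompt layout that mode produces can be sketched as below. This is a minimal sketch assuming the upstream Vicuna v1.1 template (system line plus `USER:`/`ASSISTANT:` turns separated by `</s>`); the exact wording and separators are assumptions, not something this commit specifies.

```python
# Sketch of a Vicuna-v1-style prompt builder. The system line and the
# USER/ASSISTANT separators are assumed from the public Vicuna v1.1
# template; adjust them to match whatever the loaded config actually uses.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(history, user_message):
    """history: list of completed (user, assistant) turns; returns the full prompt string."""
    parts = [SYSTEM]
    for user, assistant in history:
        # Completed turns are closed with the end-of-sequence marker.
        parts.append(f"USER: {user} ASSISTANT: {assistant}</s>")
    # The new turn ends at "ASSISTANT:" so the model continues from there.
    parts.append(f"USER: {user_message} ASSISTANT:")
    return " ".join(parts)

print(build_prompt([("Hi!", "Hello! How can I help?")], "Who are you?"))
```

The key point of the broken-context bug described above is multi-turn history: a correct template must carry earlier turns forward in exactly this kind of concatenated form, which is why the dataset format fed to the convert tool matters.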