uncensored
Neko-Institute-of-Science committed on
Commit
c47853c
1 Parent(s): ce1cdcb

fix and add info

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
@@ -10,8 +10,12 @@ https://github.com/oobabooga/text-generation-webui
 
 ATM I'm using 2023.05.04v0 of the dataset and training full context.
 
+# Notes:
+I'm only training 1 epoch, as full-context 30B takes a long time to train.
+My 1 epoch will take me 8 days, lol, but luckily the LoRA already feels fully functional at epoch 1, as shown on my 13B one.
+
 # How to test?
-1. Download LLaMA-13B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
+1. Download LLaMA-30B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
 2. Replace special_tokens_map.json and tokenizer_config.json with the ones from this repo.
 3. Rename LLaMA-30B-HF to vicuna-30b.
 4. Load ooba: ```python server.py --listen --model vicuna-30b --load-in-8bit --chat --lora checkpoint-xxxx```