---
datasets:
- gozfarb/ShareGPT_Vicuna_unfiltered
- Aeala/ShareGPT_Vicuna_unfiltered
tags:
- uncensored
---
**TEST FINISHED FOR NOW. I HAVE MOVED TO 30B TRAINING.**

# Conversion tool
https://github.com/practicaldreamer/vicuna_to_alpaca

# Training tool
https://github.com/oobabooga/text-generation-webui

At the moment I'm using v4.3 of the dataset and training at full context.
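
For context, a LoRA setup along these lines looks roughly like the sketch below with transformers + PEFT. The hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the exact settings used for this run.

```python
# Illustrative LoRA setup with transformers + PEFT.
# All hyperparameters here are assumptions, NOT the actual training settings.
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

base = LlamaForCausalLM.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-13B-HF",
    load_in_8bit=True,   # fit the 13B base model in 8-bit for adapter training
    device_map="auto",
)

config = LoraConfig(
    r=8,                                  # assumed adapter rank
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # typical LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
# "Full context" means training on sequences up to LLaMA's 2048-token limit.
```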

This LoRA already feels fully functional.
To use this LoRA, replace the model's config files with Vicuna's (I will host them here). In other words: load a normal LLaMA model, replace its config files, then load the LoRA.
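
If you would rather load it programmatically instead of through ooba, a minimal sketch with transformers + PEFT looks like this ("checkpoint-xxxx" is a placeholder for whichever checkpoint directory you downloaded):

```python
# Minimal sketch: load LLaMA-13B-HF in 8-bit and apply one of the LoRA checkpoints.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-13B-HF",
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("Neko-Institute-of-Science/LLaMA-13B-HF")

# "checkpoint-xxxx" stands for the checkpoint directory placed in your loras folder.
model = PeftModel.from_pretrained(base, "checkpoint-xxxx")

prompt = "USER: What can you help me with?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```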

**checkpoint-9728-failed**: This first test used the original format from the conversion tool, but it was later found that this format caused broken context. The model would work as expected from the initial prompt, but the moment you asked it about anything earlier in the conversation it would reply with something random.
I have since restarted training with the tool's new format B, and that seems to have fixed the issue with the original format.

# How to test?
1. Download LLaMA-13B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-HF
2. Download the checkpoint-xxxx you want into the loras folder.
3. Load ooba: `python server.py --listen --model LLaMA-13B-HF --load-in-8bit --chat --lora checkpoint-xxxx`
4. Set instruct mode to Vicuna-v1 (see the template sketch below).
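
For reference, the Vicuna-v1 instruct preset corresponds to a prompt template along these lines (shown here as an assumption, based on the Vicuna v1.1 format):

```
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: <your message>
ASSISTANT:
```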


# Training log
https://wandb.ai/neko-science/VicUnLocked/runs/cas6am7s