
TESTING FINISHED FOR NOW. I'VE MOVED ON TO 30B TRAINING.

Conversion tool

https://github.com/practicaldreamer/vicuna_to_alpaca
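For reference, the conversion essentially flattens ShareGPT-style conversation turns into Alpaca instruction/output pairs. Here is a minimal sketch of that idea; the field names ("conversations", "from", "value") are assumptions based on the common ShareGPT layout, not necessarily the tool's exact schema:

```python
import json

def convert(in_path: str, out_path: str) -> None:
    # Field names below are assumptions, not the tool's exact schema.
    with open(in_path) as f:
        conversations = json.load(f)

    records = []
    for convo in conversations:
        turns = convo.get("conversations", [])
        # Pair each human turn with the assistant turn that follows it.
        for human, gpt in zip(turns[::2], turns[1::2]):
            if human.get("from") == "human" and gpt.get("from") == "gpt":
                records.append({
                    "instruction": human["value"],
                    "input": "",
                    "output": gpt["value"],
                })

    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)

convert("sharegpt.json", "alpaca.json")
```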

Training tool

https://github.com/oobabooga/text-generation-webui

At the moment I'm using v4.3 of the dataset and training at full context.

This LoRA already feels fully functional. To use it, replace the base model's config files with Vicuna's, which I will host here. Other than that, load normal LLaMA, swap in the config files, then load the LoRA.
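If you'd rather load it programmatically instead of through ooba, a rough equivalent with transformers + PEFT looks like this. This is a sketch, not the exact setup used here; the checkpoint path is a placeholder and 8-bit loading assumes bitsandbytes is installed:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Base weights; load_in_8bit matches ooba's --load-in-8bit flag
# (requires bitsandbytes).
base = LlamaForCausalLM.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-13B-HF",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-13B-HF"
)

# Apply the LoRA adapter on top of the base weights.
# "loras/checkpoint-xxxx" is a placeholder for the checkpoint
# folder you downloaded.
model = PeftModel.from_pretrained(base, "loras/checkpoint-xxxx")
```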

checkpoint-9728-failed: This first test used the original format from the conversion tool, but it was later found that this format broke the model's handling of context. It would respond as expected to the initial prompt, but the moment you asked it about anything earlier in the conversation it would say something random. I have since restarted training with the tool's new format B, which seems to have fixed the issue.

How to test?

  1. Download LLaMA-13B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-HF
  2. Download the checkpoint-xxxx you want into the loras folder.
  3. Load ooba: python server.py --listen --model LLaMA-13B-HF --load-in-8bit --chat --lora checkpoint-xxxx
  4. Set instruct mode to Vicuna-v1 (template sketched below).
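For reference, the Vicuna-v1 instruction template roughly follows the format below; the exact wording of the system line may differ slightly between Vicuna versions:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: <your prompt>
ASSISTANT:
```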

Training log

https://wandb.ai/neko-science/VicUnLocked/runs/cas6am7s

