
VicUnlocked-30B-LoRA HF

This is an HF format float16 repo of Neko Institute of Science's VicUnLocked 30B LoRA.

It is the result of merging the above LoRA with the original LLaMA 30B.
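For reference, merges like this are commonly done with the PEFT library. Below is a minimal sketch of that approach; the adapter path and output directory are illustrative assumptions, not necessarily the exact process used to build this repo.

```python
# Minimal sketch: merging a LoRA adapter into base LLaMA weights with PEFT.
# The adapter path and output directory below are assumptions for illustration.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-30B-HF",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the LoRA adapter (assumed local path), then bake its deltas
# into the base weights so the result is a plain float16 checkpoint.
model = PeftModel.from_pretrained(base, "VicUnLocked-30b-LoRA")
model = model.merge_and_unload()

model.save_pretrained("VicUnlocked-30B-LoRA-HF")
LlamaTokenizer.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-30B-HF"
).save_pretrained("VicUnlocked-30B-LoRA-HF")
```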

Repositories available

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Patreon special mentions: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donors!

Original model card

Convert tools

https://github.com/practicaldreamer/vicuna_to_alpaca

Training tool

https://github.com/oobabooga/text-generation-webui

At the moment I'm using version 2023.05.04v0 of the dataset and training at full context.

Notes:

So I will only be training 1 epoch, as full-context 30B takes so long to train. This 1 epoch will take me 8 days lol, but luckily this LoRA feels fully functional at epoch 1, as shown on my 13B one. I will also be uploading checkpoints almost every day. I could train another epoch if there's enough demand for it.

Update: Since I will not be training past 1 epoch, @Aeala is training for the full 3 epochs at https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA, but it's half context if you care about that. Also, @Aeala is just about done.

Update: Training finished at epoch 1. These 8 days sure felt long. I only have one A6000, lads; there's only so much I can do. Also RIP gozfarb, IDK what happened to him.

How to test?

  1. Download LLaMA-30B-HF if you have not already: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
  2. Make a folder called VicUnLocked-30b-LoRA in the loras folder.
  3. Download adapter_config.json and adapter_model.bin into VicUnLocked-30b-LoRA (or use the scripted alternative sketched after this list).
  4. Load ooba: python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora VicUnLocked-30b-LoRA
  5. Select instruct mode and choose the Vicuna-v1.1 template.
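As a scripted alternative to the manual download in steps 2-3, here is a small sketch using huggingface_hub. The adapter repo ID is an assumption based on this card; adjust it if the actual repo name differs.

```python
# Sketch: fetch the adapter files into text-generation-webui's loras folder.
# The repo_id below is an assumed value; verify it against the actual repo.
from huggingface_hub import hf_hub_download

for fname in ("adapter_config.json", "adapter_model.bin"):
    hf_hub_download(
        repo_id="Neko-Institute-of-Science/VicUnLocked-30b-LoRA",  # assumed repo ID
        filename=fname,
        local_dir="loras/VicUnLocked-30b-LoRA",
    )
```

After this, launching the server as in step 4 should pick up the adapter from the loras folder.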

Training Log

https://wandb.ai/neko-science/VicUnLocked/runs/vx8yzwi7
