## Overview

This is Llama-2 13B fine-tuned on the uncensored/unfiltered Wizard-Vicuna conversation dataset ehartford/wizard_vicuna_70k_unfiltered, using QLoRA. Training ran for one epoch on an instance with two 24 GB NVIDIA RTX 3090 GPUs and took ~26.5 hours.

```
{'train_runtime': 95229.7197, 'train_samples_per_second': 0.363, 'train_steps_per_second': 0.091, 'train_loss': 0.5828390517308127, 'epoch': 1.0}
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 8649/8649 [26:27:09<00:00, 11.01s/it]
Training complete, adapter model saved in models//llama2_13b_chat_uncensored_adapter
```

The version here is the fp16 HuggingFace model.
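Since the fp16 weights load like any other HuggingFace causal LM, a minimal loading sketch (not from the model card; the model id is taken from this repo, and ~26 GB of GPU memory is assumed for 13B fp16 weights) looks like:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arogov/llama2_13b_chat_uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card ships fp16 weights
    device_map="auto",          # spread layers across available GPUs
)
```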

## GGML & GPTQ versions

Thanks to TheBloke for creating the GGML and GPTQ versions of this model.

## Prompt style

The model was trained with the following prompt style:

```
### HUMAN:
Hello

### RESPONSE:
Hi, how are you?

### HUMAN:
I'm fine.

### RESPONSE:
How can I help you?
...
```
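To query the model at inference time, the conversation should be rendered in the same format, ending with an open `### RESPONSE:` header for the model to complete. Below is a hypothetical helper (not part of the training repo) that does this, reusing the `tokenizer` and `model` from the loading sketch in the Overview:

```python
def build_prompt(turns):
    """turns: list of (role, text) pairs, where role is 'HUMAN' or 'RESPONSE'."""
    parts = [f"### {role}:\n{text}\n" for role, text in turns]
    parts.append("### RESPONSE:\n")  # open header so the model writes the reply
    return "\n".join(parts)

prompt = build_prompt([("HUMAN", "What is QLoRA?")])
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                         skip_special_tokens=True)
print(reply)
```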

## Training code

The code used to train the model is available at https://github.com/georgesung/llm_qlora.

To reproduce the results:

```bash
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama2_13b_chat_uncensored.yaml
```
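For readers unfamiliar with QLoRA, the rough shape of what such a training script sets up is a 4-bit quantized base model with trainable LoRA adapters attached via peft. The sketch below illustrates that technique with current transformers/peft/bitsandbytes APIs; the hyperparameter values are illustrative assumptions, and this is NOT the llm_qlora repo's actual code:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",            # base model being fine-tuned
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, # illustrative values, not the repo's
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)   # only the adapter weights are trained
```

Only the small LoRA adapter is saved at the end of training (as the log above shows), which is why the fp16 model published here was produced by merging the adapter back into the base weights.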

## Fine-tuning guide

https://georgesung.github.io/ai/qlora-ift/
