## Overview
Fine-tuned Llama-2 7B on a 35k-example subset of the OpenOrca dataset, `georgesung/OpenOrca_35k`, using QLoRA. Trained for one epoch on a single 24 GB GPU (NVIDIA A10G) instance.
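For reference, below is a minimal sketch of what a QLoRA setup with `transformers`, `bitsandbytes`, and `peft` looks like. The base model ID and the hyperparameters shown here (rank, alpha, dropout, target modules) are illustrative assumptions, not the exact values used for this model; the real settings live in the training repo's config file linked below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint

# Load the base model in 4-bit NF4 precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,                                  # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # illustrative target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
tokenizer = AutoTokenizer.from_pretrained(base_model)
```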
## Prompt style
The model was trained with the following prompt style:
```
### System:
You are a helpful AI assistant.
### Instruction:
Hello
### Response:
Hi, how can I help you?
```
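At inference time, prompts should follow the same template so that the model generates its reply after `### Response:`. A minimal sketch is below; the `build_prompt` helper and the `MODEL_ID` placeholder are illustrative and not part of the training repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path-to-this-model"  # placeholder: substitute this model's Hub ID or a local checkpoint

def build_prompt(instruction: str, system: str = "You are a helpful AI assistant.") -> str:
    # Mirrors the training prompt template shown above.
    return f"### System:\n{system}\n### Instruction:\n{instruction}\n### Response:\n"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

inputs = tokenizer(build_prompt("Hello"), return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```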
## Training code
Code used to train the model is available at [github.com/georgesung/llm_qlora](https://github.com/georgesung/llm_qlora).
To reproduce the results:

```bash
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama2_7b_orca_35k.yaml
```
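Training produces a LoRA adapter rather than a full set of model weights; it can be merged back into the base model for standalone inference. The sketch below uses the standard `peft` API for this; the adapter and output directory names are placeholders, since the actual output path depends on the training config.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Llama-2-7b-hf"       # assumed base checkpoint
adapter_dir = "path-to-trained-adapter"        # placeholder: wherever train.py wrote the adapter

model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_dir)
model = model.merge_and_unload()               # fold the adapter weights into the base model

tokenizer = AutoTokenizer.from_pretrained(base_model)
model.save_pretrained("llama2_7b_orca_35k_merged")
tokenizer.save_pretrained("llama2_7b_orca_35k_merged")
```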
## Fine-tuning guide