---
license: bigscience-openrail-m
datasets:
- lvwerra/stack-exchange-paired
language:
- en
tags:
- trl
- transformers
- rlhf
---
# Stack-Llama-2
A [DPO](https://github.com/eric-mitchell/direct-preference-optimization) fine-tuned [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b). The model is designed to generate human-like answers to questions from Stack Exchange domains such as programming, mathematics, and physics. For more information, check out the [blog post](https://huggingface.co/blog/dpo-trl) and the GitHub [example](https://github.com/lvwerra/trl/tree/main/examples/research_projects/stack_llama_2/scripts).
## Training Details
### Training Data
Original datasets are described in [the LLaMA Model Card](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#training-dataset).
Fine-tuning datasets for this model are based on [Stack Exchange Paired](https://huggingface.co/datasets/lvwerra/stack-exchange-paired), which consists of questions and answers from various domains in Stack Exchange, such as programming, mathematics, physics, and more. Specifically:
**Traditional Fine-tuning:** [https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune)
**DPO Training:** [https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl)
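Both splits can be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch; it assumes the splits are selectable via the `data_dir` argument, mirroring the directory layout linked above.
```python
from datasets import load_dataset

# Supervised fine-tuning split (question/answer pairs)
finetune_ds = load_dataset(
    "lvwerra/stack-exchange-paired", data_dir="data/finetune", split="train"
)

# Preference split used for DPO (preferred vs. rejected answers)
dpo_ds = load_dataset(
    "lvwerra/stack-exchange-paired", data_dir="data/rl", split="train"
)
```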
### Training Procedure
The model was first fine-tuned on the Stack Exchange question-and-answer pairs (supervised fine-tuning) and then further trained with the DPO procedure on Stack Exchange preference pairs. Unlike PPO-style RLHF, DPO optimizes the policy directly on the preference data, so no separate reward model is needed.
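The DPO step can be reproduced with TRL's `DPOTrainer`. The sketch below is illustrative rather than the exact training configuration: argument names vary across TRL releases, and the linked example scripts (which also map the raw dataset columns into `prompt`/`chosen`/`rejected` fields) are the authoritative reference.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# The SFT checkpoint serves both as the policy and, as a frozen copy,
# as the DPO reference model ("path/to/sft-model" is a placeholder).
model = AutoModelForCausalLM.from_pretrained("path/to/sft-model")
ref_model = AutoModelForCausalLM.from_pretrained("path/to/sft-model")
tokenizer = AutoTokenizer.from_pretrained("path/to/sft-model")

# Preference pairs; in the original scripts the raw columns are first
# mapped into prompt/chosen/rejected fields.
train_dataset = load_dataset(
    "lvwerra/stack-exchange-paired", data_dir="data/rl", split="train"
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=TrainingArguments(output_dir="stack-llama-2-dpo", per_device_train_batch_size=4),
    beta=0.1,  # strength of the implicit KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```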
It is trained to respond to prompts with the following template:
```
Question: <Query>
Answer: <Response>
```
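For illustration, here is a minimal generation sketch with `transformers` that formats a prompt per the template above; the model id is assumed to be this repository's (`kashif/stack-llama-2`).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kashif/stack-llama-2"  # assumed: this repository's model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt formatted according to the template above
prompt = "Question: How do I sort a list of tuples by the second element in Python?\nAnswer: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```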