---
model-index:
- name: tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm
  results: []
datasets:
- allenai/tulu-2.5-preference-data
- allenai/tulu-v2-sft-mixture
language:
- en
base_model: allenai/tulu-2-13b
license: apache-2.0
---
<center>
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-2.5/tulu_25_banner.png" alt="Tulu 2.5 banner image" width="800px"/>
</center>

# Model Card for Tulu V2.5 PPO 13B - UltraFeedback Mean w. 70B UltraFeedback RM

Tulu is a series of language models that are trained to act as helpful assistants.
Tulu V2.5 is a series of models trained using DPO and PPO starting from the [Tulu 2 suite](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).
This model is trained with PPO on the UltraFeedback dataset (using the per-aspect/fine-grained scores to decide chosen and rejected responses).
We used a 70B reward model trained on the UltraFeedback dataset, and used the UltraFeedback prompts during PPO training.

For more details, read the paper:
[Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://arxiv.org/abs/2406.09279).


## Model description

- **Model type:** One model belonging to a suite of RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
- **Language(s) (NLP):** English
- **License:** Apache 2.0.
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)

### Model Sources

- **Repository:** https://github.com/allenai/open-instruct
- **Dataset:** Data used to train this model can be found [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `ultrafeedback_mean_aspects` split. Only the prompts were used.
- **Model Family:** The collection of related models can be found [here](https://huggingface.co/collections/allenai/tulu-v25-suite-66676520fd578080e126f618).
- **Reward Model:** The reward model used during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-70b-uf-rm), and the data used to train it [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `ultrafeedback_mean_aspects` split.
- **Value Model:** The value model trained during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-value).
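
As an illustrative sketch of how the reward model could be used to score a response (this assumes the RM loads as a standard Hugging Face sequence-classification model with a single scalar output; check the reward model repository for the exact loading recipe):

```python
# Hypothetical sketch: scoring a prompt/response pair with the 70B UltraFeedback RM.
# Assumes a standard sequence-classification head with one scalar logit.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

rm_name = "allenai/tulu-v2.5-70b-uf-rm"
tokenizer = AutoTokenizer.from_pretrained(rm_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(
    rm_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Score a chat formatted with the Tulu template (see "Input Format" below).
text = "<|user|>\nWhat is the capital of France?\n<|assistant|>\nThe capital of France is Paris.\n"
inputs = tokenizer(text, return_tensors="pt").to(reward_model.device)
with torch.no_grad():
    reward = reward_model(**inputs).logits.squeeze().item()
print(f"reward: {reward:.3f}")
```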

## Results

Tulu V2.5 PPO is trained to be a generalist model, and matches or outperforms Tulu 2+DPO 13B.
It even beats Tulu 2+DPO 70B in some cases, although it loses out in harder reasoning tasks.
For details on training and evaluation, read [our paper](https://arxiv.org/abs/2406.09279)!


| Model | Size | Alignment | GSM8k 8-shot CoT Acc. | AlpacaEval 2 Winrate (LC) | Average Perf. across Open-Instruct evals  |
|-|-|-|-|-|-|
| **Tulu V2.5 PPO 13B (this model)** | 13B | PPO with 70B RM | 58.0 | **26.7** | 62.8 |
| **Tulu V2 DPO 13B** | 13B | DPO | 50.5 | 16.0 | 61.0 |
| **Tulu V2 SFT 13B** | 13B | - | 46.0 | 10.4 | 62.8 |
| **Tulu V2 DPO 70B** | 70B | DPO | **71.5** | 21.2 | **69.4** |

## Input Format

The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
We have included a [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating) in the tokenizer that implements this format.
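
As a quick, illustrative sketch (the model id comes from this card; the generation settings below are placeholders rather than tuned recommendations), the bundled chat template can be applied with `tokenizer.apply_chat_template`:

```python
# Minimal sketch: generating with the Tulu chat template via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# The chat template produces "<|user|>\n...\n<|assistant|>\n",
# including the trailing newline after <|assistant|>.
messages = [{"role": "user", "content": "Write a haiku about preference learning."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```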

## Model Family

| [Preference Data](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data), [Prompts Data](https://huggingface.co/datasets/allenai/tulu-2.5-prompts) | DPO Models | PPO Models | Reward Models | Value Models |
|-------------|-------------|-------------|---------------|---------------|
| ultrafeedback_mean_aspects | [tulu-v2.5-dpo-13b-uf-mean](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-uf-mean) | [tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm) | [tulu-v2.5-70b-uf-rm](https://huggingface.co/allenai/tulu-v2.5-70b-uf-rm) | [tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-value](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-value) |
| preference_big_mixture |  =  | [tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm) | [tulu-v2.5-13b-preference-mix-rm](https://huggingface.co/allenai/tulu-v2.5-13b-preference-mix-rm) | [tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm-value](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm-value) |
| preference_big_mixture |  =  | [tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm) | [tulu-v2.5-70b-preference-mix-rm](https://huggingface.co/allenai/tulu-v2.5-70b-preference-mix-rm) | [tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm-value](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm-value) |
| ultrafeedback_mean_aspects |  =  | [tulu-v2.5-ppo-13b-uf-mean](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean) | [tulu-v2.5-13b-uf-rm](https://huggingface.co/allenai/tulu-v2.5-13b-uf-rm) | [tulu-v2.5-ppo-13b-uf-mean-13b-uf-rm-value](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-13b-uf-rm-value) |
| ultrafeedback_mean_aspects |  =  | [tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-mixed-prompts](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-mixed-prompts) | [tulu-v2.5-70b-uf-rm](https://huggingface.co/allenai/tulu-v2.5-70b-uf-rm) * with extra prompts | [tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-mixed-prompts-value](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-mixed-prompts-value) |
| hh_rlhf_60k | [tulu-v2.5-dpo-13b-hh-rlhf-60k](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-hh-rlhf-60k) | [tulu-v2.5-ppo-13b-hh-rlhf-60k](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-hh-rlhf-60k) | [tulu-v2.5-13b-hh-rlhf-60k-rm](https://huggingface.co/allenai/tulu-v2.5-13b-hh-rlhf-60k-rm) |  |
| chatbot_arena_2023 | [tulu-v2.5-dpo-13b-chatbot-arena-2023](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-chatbot-arena-2023) | [tulu-v2.5-ppo-13b-chatbot-arena-2023](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-chatbot-arena-2023) | [tulu-v2.5-13b-chatbot-arena-2023-rm](https://huggingface.co/allenai/tulu-v2.5-13b-chatbot-arena-2023-rm) |  |
| stack_exchange_60k | [tulu-v2.5-dpo-13b-stackexchange-60k](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-stackexchange-60k) | [tulu-v2.5-ppo-13b-stackexchange-60k](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-stackexchange-60k) | [tulu-v2.5-13b-stackexchange-60k-rm](https://huggingface.co/allenai/tulu-v2.5-13b-stackexchange-60k-rm) |  |
| nectar_60k |  N/A | [tulu-v2.5-ppo-13b-nectar-60k](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-nectar-60k) | [tulu-v2.5-13b-nectar-60k-rm](https://huggingface.co/allenai/tulu-v2.5-13b-nectar-60k-rm) |  |
| nectar | [tulu-v2.5-dpo-13b-nectar](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-nectar) |  |  |  |
| helpsteer | [tulu-v2.5-dpo-13b-helpsteer](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-helpsteer) |  |  |  |
| shp2 | [tulu-v2.5-dpo-13b-shp2](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-shp2) |  |  |  |
| stack_exchange_paired | [tulu-v2.5-dpo-13b-stackexchange](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-stackexchange) |  |  |  |
| ultrafeedback_overall | [tulu-v2.5-dpo-13b-uf-overall](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-uf-overall) |  |  |  |
| capybara | [tulu-v2.5-dpo-13b-capybara](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-capybara) |  |  |  |
| prm800k_pairs_phase2 | [tulu-v2.5-dpo-13b-prm-phase-2](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-prm-phase-2) |  |  |  |
| hh_rlhf | [tulu-v2.5-dpo-13b-hh-rlhf](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-hh-rlhf) |  |  |  |
| chatbot_arena_2024 | [tulu-v2.5-dpo-13b-chatbot-arena-2024](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-chatbot-arena-2024) |  |  |  |
| alpaca_farm_human_pref | [tulu-v2.5-dpo-13b-alpacafarm-human-pref](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-alpacafarm-human-pref) |  |  |  |
| alpaca_farm_gpt4_pref | [tulu-v2.5-dpo-13b-alpacafarm-gpt4-pref](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-alpacafarm-gpt4-pref) |  |  |  |
| orca_dpo_pairs | [tulu-v2.5-dpo-13b-argilla-orca-pairs](https://huggingface.co/allenai/tulu-v2.5-dpo-13b-argilla-orca-pairs) |  |  |  |

*The extra prompts are all the prompts in the prompts dataset. Default only uses the split `ultrafeedback_prompts`.

## Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
We then further aligned the model with a [Jax DPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_dpo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the dataset mentioned above.

## Bias, Risks, and Limitations

The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). 
The size and composition of the corpus used to train the base Llama 2 models is also unknown, but it likely included a mix of web data and technical sources such as books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.


### Training hyperparameters

The following hyperparameters were used during PPO training:
- learning_rate: 1e-06
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- KL penalty coefficient: 0.0325 (we found the larger RM benefited from a smaller KL penalty)
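
As an illustrative sketch only (this is not the training script; the actual runs used the codebase linked above, and the step count and `policy` handle below are placeholders), these settings map onto PyTorch roughly as follows:

```python
# Illustrative mapping of the listed optimization settings; names are hypothetical.
import torch
from transformers import get_linear_schedule_with_warmup

LEARNING_RATE = 1e-6
TOTAL_BATCH_SIZE = 64
KL_COEF = 0.0325          # KL penalty coefficient used with the 70B RM
NUM_TRAIN_STEPS = 1_000   # placeholder; depends on dataset size and num_epochs=1.0

policy = torch.nn.Linear(8, 8)  # stand-in for the actual policy model

optimizer = torch.optim.Adam(
    policy.parameters(),
    lr=LEARNING_RATE,
    betas=(0.9, 0.999),
    eps=1e-8,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * NUM_TRAIN_STEPS),  # lr_scheduler_warmup_ratio: 0.1
    num_training_steps=NUM_TRAIN_STEPS,
)

# Inside the PPO loop, the per-token reward is typically shaped as:
#   r_t = RM score (at final token) - KL_COEF * (log pi(a_t|s_t) - log pi_ref(a_t|s_t))
```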

## Citation

If you find Tulu 2.5 useful in your work, please cite it with:

```
@misc{ivison2024unpacking,
      title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}}, 
      author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2406.09279},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```