Text Generation
Transformers
PyTorch
Safetensors
English
llama
conversational
text-generation-inference
Inference Endpoints
hamishivi committed
Commit 4c937e6
1 Parent(s): 22c150c

Update README.md

Files changed (1)
  1. README.md +13 -8
README.md CHANGED
@@ -36,7 +36,7 @@ This model is a strong alternative to Llama 2 70b Chat.
 
 ## Performance
 
-At the time of release, the Tulu-v2-dpo-70b model is approximately equal to GPT-4 on AlpacaEval, and has a score of TODO on MT-Bench.
+At the time of release, the Tulu-v2-dpo-70b model is approximately equal to GPT-4 on AlpacaEval, and has a score of 7.89 on MT-Bench.
 All smaller DPO'd models have strong performance per model size in the category and with lower verbosity (average completion length).
 | Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
 |-------------|-----|----|---------------|--------------|
@@ -65,11 +65,11 @@ All smaller DPO'd models have strong performance per model size in the category
 
 ## Intended uses & limitations
 
-The model was initially fine-tuned on a filtered and preprocessed version of the Tulu V2 mix dataset (TODO add link), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
+The model was initially fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
 We then further aligned the model with a [Jax DPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_dpo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4.
 
 
-<!-- You can find the datasets used for training Tulu V2 [here]() -->
+<!-- You can find the datasets used for training Tulu V2 [here]()
 
 Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
 
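Since the card points readers at the `pipeline()` API (its expected output appears in the next hunk), here is a minimal, self-contained sketch of that usage. The repo id, dtype, and sampling settings are illustrative assumptions; only the `<|user|>`/`<|assistant|>` prompt format is taken from the output shown in the diff.

```python
# Minimal sketch of pipeline() usage, assuming the model is published as
# "allenai/tulu-2-dpo-70b" (assumed repo id) and that bfloat16 weights fit
# across the available devices. Sampling parameters are illustrative.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="allenai/tulu-2-dpo-70b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Prompt format inferred from the <|user|>/<|assistant|> markers in the diff.
prompt = "<|user|>\nHow many helicopters can a human eat in one sitting?\n<|assistant|>\n"
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```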
@@ -100,7 +100,7 @@ print(outputs[0]["generated_text"])
 # How many helicopters can a human eat in one sitting?</s>
 # <|assistant|>
 # Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
-```
+```-->
 
 ## Bias, Risks, and Limitations
 
@@ -112,10 +112,9 @@ It is also unknown what the size and composition of the corpus was used to train
 
 ### Training hyperparameters
 
-The following hyperparameters were used during training:
+The following hyperparameters were used during DPO training:
 - learning_rate: 5e-07
 - total_train_batch_size: 32
-- total_eval_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
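The hyperparameters above parameterize a DPO run; for readers unfamiliar with the objective, here is a minimal PyTorch sketch of the DPO loss. This illustrates the technique only, not the linked EasyLM/JAX trainer, and the `beta` value is an assumption rather than a documented setting.

```python
# Illustrative DPO loss over summed per-sequence log-probabilities.
# `beta` is assumed; the card does not state the value used.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # How much more the policy prefers each completion than the frozen
    # reference model does.
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Maximize the gap between the chosen and rejected margins.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```

Under the listed settings, this objective would be minimized with Adam at a 5e-07 learning rate on a linear schedule with a 0.1 warmup ratio.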
@@ -124,10 +123,16 @@ The following hyperparameters were used during training:
 
 ## Citation
 
-If you find Tulu V2 useful in your work, please cite it with:
+If you find Tulu 2 useful in your work, please cite it with:
 
 ```
-TODO
+@misc{ivison2023changing,
+      title={Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2},
+      author={Hamish Ivison and Yizhong Wang and Valentina Pyatkin and Nathan Lambert and Matthew Peters and Pradeep Dasigi and Joel Jang and David Wadden and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
+      year={2023},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
 ```
 
 *Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md)*
 