---
tags:
  - generated_from_trainer
model-index:
  - name: tulu-2-dpo-70b
    results: []
datasets:
  - HuggingFaceH4/ultrafeedback_binarized
language:
  - en
base_model: meta-llama/Llama-2-70b-hf
---
TuluV2 banner

Model Card for Tulu V2 DPO 70B

Tulu is a series of language models trained to act as helpful assistants. Tulu V2 DPO 70B is a fine-tuned version of Llama 2 that was trained on a mix of publicly available, synthetic, and human-created datasets using Direct Preference Optimization (DPO). This model is a strong alternative to Llama 2 70b Chat.

Model description

  • Model type: The flagship model of a suite of instruction and RLHF tuned chat models on a mix of publicly available, synthetic and human-created datasets.
  • Language(s) (NLP): Primarily English
  • License: MIT
  • Finetuned from model: meta-llama/Llama-2-70b-hf

Model Sources

Performance

At the time of release, the Tulu-v2-dpo-70b model is approximately equal to GPT-4 on AlpacaEval and scores TODO on MT-Bench. The smaller DPO'd models also deliver strong performance for their size, with lower verbosity (shorter average completion lengths).

| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|---|---|---|---|---|
| Tulu-v2-7b 🐪 | 7B | dDPO | TODO | TODO |
| Tulu-v2-dpo-7b 🐪 | 7B | dDPO | TODO | TODO |
| StableLM-Tuned-α | 7B | dSFT | 2.75 | - |
| MPT-Chat | 7B | dSFT | 5.42 | - |
| Xwin-LM v0.1 | 7B | dPPO | 6.19 | 87.83 |
| Mistral-Instruct v0.1 | 7B | - | 6.84 | - |
| Zephyr-7b-α | 7B | dDPO | 6.88 | - |
| Zephyr-7b-β 🪁 | 7B | dDPO | 7.34 | 90.60 |
| Tulu-v2-13b 🐪 | 13B | dDPO | TODO | TODO |
| Tulu-v2-dpo-13b 🐪 | 13B | dDPO | TODO | TODO |
| Falcon-Instruct | 40B | dSFT | 5.17 | 45.71 |
| Guanaco | 65B | SFT | 6.41 | 71.80 |
| Llama2-Chat | 70B | RLHF | 6.86 | 92.66 |
| Vicuna v1.3 | 33B | dSFT | 7.12 | 88.99 |
| WizardLM v1.0 | 70B | dSFT | 7.71 | - |
| Xwin-LM v0.1 | 70B | dPPO | - | 95.57 |
| Tulu-v2-70b 🐪 | 70B | dDPO | TODO | TODO |
| Tulu-v2-dpo-70b 🐪 | 70B | dDPO | TODO | TODO |
| GPT-3.5-turbo | - | RLHF | 7.94 | 89.37 |
| Claude 2 | - | RLHF | 8.06 | 91.36 |
| GPT-4 | - | RLHF | 8.99 | 95.28 |

Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed version of the Tulu V2 mix dataset (TODO add link), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs. We then further aligned the model with a Jax DPO trainer built on EasyLM, using the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4.
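
For intuition, DPO optimizes the policy directly on preference pairs (a chosen and a rejected completion per prompt) against a frozen reference model, with no separate reward model or PPO loop. The snippet below is a minimal PyTorch sketch of the DPO loss for illustration only; the actual training used a Jax DPO trainer built on EasyLM, and the function name and toy inputs here are hypothetical.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Illustrative DPO loss from summed per-sequence log-probabilities.

    Each argument is a tensor of shape (batch,) holding the log-probability
    the policy (or frozen reference model) assigns to the chosen/rejected
    completion for a prompt.
    """
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # Push the policy to prefer the chosen completion more strongly than the
    # reference model does, with the strength of the KL constraint set by beta.
    logits = beta * (policy_logratio - ref_logratio)
    return -F.logsigmoid(logits).mean()

# Toy usage with random numbers standing in for real log-probabilities.
pol_c, pol_r = torch.randn(4), torch.randn(4)
ref_c, ref_r = torch.randn(4), torch.randn(4)
print(dpo_loss(pol_c, pol_r, ref_c, ref_r))
```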

Here's how you can run the model using the pipeline() function from 🤗 Transformers:

# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="allenai/tulu-2-dpo-70b", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!

Bias, Risks, and Limitations

The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base Llama 2 models is also unknown, but it is likely to have included a mix of web data and technical sources like books and code. See the Falcon 180B model card for an example of this.

Training hyperparameters

The following hyperparameters were used during training (an illustrative sketch of how they fit together follows the list):

  • learning_rate: 5e-07
  • total_train_batch_size: 32
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3.0
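
As a rough guide to how these settings fit together, here is a minimal PyTorch / 🤗 Transformers sketch of the optimizer and schedule. It is illustrative only: the actual run used a Jax DPO trainer built on EasyLM, and the `model` module and `num_training_steps` value below are placeholders.

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Hypothetical setup: `model` stands in for the policy being trained and
# `num_training_steps` for (dataset_size / total_train_batch_size) * num_epochs.
model = torch.nn.Linear(8, 8)   # placeholder module
num_training_steps = 6_000      # placeholder step count

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=5e-07,            # learning_rate
    betas=(0.9, 0.999),  # Adam betas
    eps=1e-08,           # Adam epsilon
)

# Linear warmup over the first 10% of steps, then linear decay to zero.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # lr_scheduler_warmup_ratio
    num_training_steps=num_training_steps,
)
```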

Citation

If you find Tulu V2 useful in your work, please cite it with:

TODO

Model card adapted from Zephyr Beta