---
datasets:
  - anon8231489123/ShareGPT_Vicuna_unfiltered
language:
  - en
extra_gated_prompt: >-
  To request access to the models, please fill out this form, and we'll review
  and let you know if your use case is approved. The information you provide
  below will be used solely to assess eligibility to access these models.
extra_gated_fields:
  First Name: text
  Last Name: text
  Institution: text
  Country (where user is located): text
  Intended Use: text
  Previous Related Publications: text
  I agree to abide by the terms of the license associated with this artifact, including domain and use-based restrictions: checkbox
---

# Open-Instruct ShareGPT 65B

This model is a 65B LLaMA model fine-tuned on the ShareGPT dataset (cleaned in a similar manner to Vicuna). Please note that this is a model diff; see below for usage instructions.

This was trained as part of the paper How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. The codebase used to train and evaluate this model can be found at https://github.com/allenai/open-instruct.

This model is licensed under the AI model license, along with the original Llama license. Both licenses can be found in our codebase: see tulu_license.txt for the model license and llama_license.txt for the Llama license.

## Usage

We assume you already have access to a LLaMA model in HF format. You can find details on getting access and converting the model here: https://huggingface.co/docs/transformers/main/model_doc/llama
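
If you want a quick sanity check that the converted checkpoint is readable by `transformers` before applying the diff, a minimal sketch like the following should work (the checkpoint path is a placeholder, not part of the official instructions):

```python
# Hypothetical sanity check: confirm the converted LLaMA checkpoint loads.
# Replace the path with wherever your HF-format LLaMA 65B lives.
from transformers import AutoConfig, AutoTokenizer

hf_llama_path = "/path/to/llama-65b-hf"  # placeholder path, adjust for your setup

config = AutoConfig.from_pretrained(hf_llama_path)
tokenizer = AutoTokenizer.from_pretrained(hf_llama_path)
print(config.model_type)  # should print "llama"
```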

Clone https://github.com/allenai/open-instruct and install the required dependencies, or just copy scripts/weight_diff.py and install the minimal requirements listed in weight-diff-requirements.txt. Then download or clone this model diff to the same machine.

Then, run:

```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```

And you will have a recovered model! Note that this step requires a substantial amount of RAM (both the base model and the diff are loaded), especially for the larger models.
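
Once recovered, the model can be loaded like any other Hugging Face causal LM. Here is a minimal sketch, assuming the recovered weights were written to `./tulu-65b-recovered` (a placeholder for `${output_path}`) and that `accelerate` is installed for `device_map="auto"`:

```python
# Minimal loading sketch; the output directory name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

output_path = "./tulu-65b-recovered"  # hypothetical recovered-model directory

tokenizer = AutoTokenizer.from_pretrained(output_path)
model = AutoModelForCausalLM.from_pretrained(
    output_path,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # requires accelerate; shards across available devices
)
```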

## Input Format

The model is trained to use the following format (note the newlines):

```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner. Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.
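
As an illustration (continuing from the loading sketch above), a prompt in this format can be built and passed to `generate` as follows; the helper function and generation settings are our own assumptions, not from the original card:

```python
# Illustrative prompt construction; note the trailing newline after <|assistant|>.
def format_prompt(user_message: str) -> str:
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = format_prompt("Write a haiku about instruction tuning.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Strip the prompt tokens so only the assistant's reply is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```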

## Performance

Here is the performance of this model across benchmarks explored in our paper How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources:

| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 61.5 | 62.8 | 14.5 | 42.0 | 42.4 | 52.1 | 33.5 | 9.5 | 29.9 | 54.0 | 72.8 | 45.6 |

If you use this model, please cite our work and the LLaMA paper:

```bibtex
@misc{wang2023far,
      title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources}, 
      author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
      year={2023},
      eprint={2306.04751},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{touvron2023llama,
      title={LLaMA: Open and Efficient Foundation Language Models}, 
      author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
      year={2023},
      eprint={2302.13971},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```