---
model-index:
- name: codetulu-2-7b
  results: []
datasets:
- allenai/tulu-v2-sft-mixture
language:
- en
base_model: codellama/CodeLlama-7b-hf
---

*TuluV2 banner*

# Model Card for Codetulu 2 7B

Tulu is a series of language models trained to act as helpful assistants. Codetulu 2 7B is a fine-tuned version of CodeLlama trained on a mix of publicly available, synthetic, and human-created datasets. Check out our paper [LINK TODO](https://google.com) for more details!

## Model description

- **Model type:** A model belonging to a suite of instruction- and RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** [AI2 ImpACT](https://allenai.org/impact-license) Low-risk license.
- **Finetuned from model:** [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf)

### Model Sources

- **Repository:** https://github.com/allenai/open-instruct
- **Model Family:** Other models and the dataset can be found in the [Tulu V2 collection](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).

## Intended uses & limitations

The model was fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs. A hedged usage sketch with the `transformers` library is included at the end of this card.

## Bias, Risks, and Limitations

The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base Llama 2 models is also unknown, although it likely included a mix of web data and technical sources such as books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.

### Training hyperparameters

The following hyperparameters were used during finetuning:
- learning_rate: 2e-5
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0

## Citation

If you find Tulu 2 useful in your work, please cite it with:

```
@misc{ivison2023changing,
      title={Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2},
      author={Hamish Ivison and Yizhong Wang and Valentina Pyatkin and Nathan Lambert and Matthew Peters and Pradeep Dasigi and Joel Jang and David Wadden and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
      year={2023},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

*Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md)*
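## Usage sketch

Below is a minimal sketch of how one might load and prompt the model with the Hugging Face `transformers` library, as referenced in the intended-uses section above. The repository id `allenai/codetulu-2-7b` and the `<|user|>` / `<|assistant|>` prompt template are assumptions based on Tulu V2 conventions, not values confirmed by this card; check the model repository for the exact identifiers and chat format.

```python
# Minimal usage sketch (assumptions: repo id "allenai/codetulu-2-7b" and the
# Tulu-style "<|user|>/<|assistant|>" prompt template; verify against the repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/codetulu-2-7b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumed Tulu-style chat template: one user turn, then an assistant turn,
# each tag on its own line, with a trailing newline before generation.
prompt = "<|user|>\nWrite a Python function that reverses a string.\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```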