---
language:
- en
pipeline_tag: text-generation
tags:
- llama-2
---

This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-7b). It was finetuned from the base [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The prompt format is therefore also the same as for the original Guanaco model.

This repo contains the merged f16 model. The QLoRA adapter can be found [here](https://huggingface.co/Mikael110/llama-2-7b-guanaco-qlora). A 13b version of the model can be found [here](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16).

**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.**

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Mikael110__llama-2-7b-guanaco-fp16)

| Metric              | Value |
|---------------------|-------|
| Avg.                | 44.6  |
| ARC (25-shot)       | 54.86 |
| HellaSwag (10-shot) | 79.65 |
| MMLU (5-shot)       | 46.38 |
| TruthfulQA (0-shot) | 43.83 |
| Winogrande (5-shot) | 75.22 |
| GSM8K (5-shot)      | 6.29  |
| DROP (3-shot)       | 5.99  |
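
Since the merged f16 weights load like any other Llama-2 checkpoint, here is a minimal usage sketch with the standard `transformers` API. The repo ID `Mikael110/llama-2-7b-guanaco-fp16` is inferred from the leaderboard results link above, and the prompt string follows the `### Human:` / `### Assistant:` format used by the original Guanaco model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo ID assumed from the leaderboard details link above.
model_id = "Mikael110/llama-2-7b-guanaco-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the repo ships merged f16 weights
    device_map="auto",
)

# Guanaco prompt format, unchanged from the original model.
prompt = "### Human: Explain what QLoRA is in one paragraph.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Alternatively, the unmerged QLoRA adapter linked above should be loadable on top of the base Llama-2-7b model with `peft` (e.g. `PeftModel.from_pretrained`) instead of downloading the merged weights.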