---
license: apache-2.0
inference: false
model-index:
- name: vicuna-7b-v1.3-attention-sparsity-10
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 52.22
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 77.05
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 47.93
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 46.87
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 13.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wang7776/vicuna-7b-v1.3-attention-sparsity-10
      name: Open LLM Leaderboard
---

# Overview

The attention layers of this model have been pruned to 10% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695). Wanda requires no retraining or weight updates and still achieves competitive performance. The base model is available [here](https://huggingface.co/lmsys/vicuna-7b-v1.3). An illustrative sketch of the pruning criterion is included below, after the Uses section.

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
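For context on the pruning described in the Overview: Wanda scores each weight by its absolute value times the L2 norm of the corresponding input activation over a small calibration set, then zeros the lowest-scoring weights within each output row. The sketch below is illustrative only; the function name, shapes, and calibration handling are assumptions, not the exact pipeline used to produce this checkpoint.

```python
import torch

def wanda_prune_layer(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.10) -> torch.Tensor:
    """Illustrative Wanda-style pruning of one linear layer (hypothetical helper).

    weight   : (out_features, in_features) matrix, e.g. an attention q/k/v/o projection
    act_norm : (in_features,) L2 norm of each input feature over a calibration set
    sparsity : fraction of weights to zero out (10% for this checkpoint)
    """
    # Wanda importance score: |weight| * input activation norm (no retraining or weight update)
    score = weight.abs() * act_norm.unsqueeze(0)

    # Zero the lowest-scoring weights within each output row
    n_prune = int(weight.shape[1] * sparsity)
    prune_idx = torch.topk(score, n_prune, dim=1, largest=False).indices
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, prune_idx, False)
    return weight * mask
```

In this setting the same rule would be applied to the attention projections of each transformer block, while MLP weights are left dense.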
## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api

A minimal Python loading sketch is also included at the end of this card.

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 125K conversations collected from ShareGPT.com. See the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf) for more details.

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and on the [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_wang7776__vicuna-7b-v1.3-attention-sparsity-10).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 51.13 |
| AI2 Reasoning Challenge (25-Shot) | 52.22 |
| HellaSwag (10-Shot)               | 77.05 |
| MMLU (5-Shot)                     | 47.93 |
| TruthfulQA (0-shot)               | 46.87 |
| Winogrande (5-shot)               | 69.53 |
| GSM8k (5-shot)                    | 13.19 |
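As referenced in the "How to Get Started" section above, the following is a minimal loading sketch with 🤗 Transformers. It assumes the pruned checkpoint keeps the standard LLaMA/Vicuna architecture (Wanda only zeroes weights, so the usual `AutoModelForCausalLM` path should apply) and uses the Vicuna-style `USER:`/`ASSISTANT:` prompt; the FastChat tools linked above remain the recommended interface and handle prompt templating automatically.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wang7776/vicuna-7b-v1.3-attention-sparsity-10"

# Load tokenizer and pruned model (fp16, spread across available devices)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna-style prompt (assumed format; FastChat applies this template for you)
prompt = "USER: What is the Wanda pruning method? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```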