---
language:
- en
license: other
datasets:
- OEvortex/vortex-mini
- yahma/alpaca-cleaned
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
pipeline_tag: text-generation
model-index:
- name: Qwen1.5-0.5B-vortex-v2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 30.63
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Abhaykoul/Qwen1.5-0.5B-vortex-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 45.54
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Abhaykoul/Qwen1.5-0.5B-vortex-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 36.29
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Abhaykoul/Qwen1.5-0.5B-vortex-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 44.29
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Abhaykoul/Qwen1.5-0.5B-vortex-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 56.04
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Abhaykoul/Qwen1.5-0.5B-vortex-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 5.91
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Abhaykoul/Qwen1.5-0.5B-vortex-v2
      name: Open LLM Leaderboard
---

# Qwen1.5-0.5B-vortex-v2 model card

Qwen1.5-0.5B-vortex-v2 is a dealigned chat finetune of the fantastic original Qwen1.5-0.5B model by the Qwen team. It was trained for 4 epochs on the Vortex mini and alpaca-cleaned datasets using axolotl. A minimal usage sketch is given at the end of this card.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Abhaykoul__Qwen1.5-0.5B-vortex-v2).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 36.45 |
| AI2 Reasoning Challenge (25-Shot) | 30.63 |
| HellaSwag (10-Shot)               | 45.54 |
| MMLU (5-Shot)                     | 36.29 |
| TruthfulQA (0-shot)               | 44.29 |
| Winogrande (5-shot)               | 56.04 |
| GSM8k (5-shot)                    |  5.91 |
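
# Usage

Below is a minimal inference sketch, not an official example from the model author. It assumes the checkpoint loads through the standard Hugging Face `transformers` causal-LM API and inherits the Qwen1.5 chat template; the prompt and generation settings are illustrative only.

```python
# Minimal sketch, assuming the standard transformers causal-LM API
# and an inherited Qwen1.5 chat template; adjust settings to taste.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Abhaykoul/Qwen1.5-0.5B-vortex-v2"  # repo id taken from the leaderboard links above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Explain in one sentence what a chat finetune is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```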