---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- chat
pipeline_tag: text-generation
model-index:
- name: Experiment23-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 72.35
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yam-peleg/Experiment23-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.77
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yam-peleg/Experiment23-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.17
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yam-peleg/Experiment23-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 78.87
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yam-peleg/Experiment23-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 85.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yam-peleg/Experiment23-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=yam-peleg/Experiment23-7B
      name: Open LLM Leaderboard
---

**Experiment23-7B**

An experiment for testing and refining a specific training and evaluation pipeline research framework.

This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.

The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs. The experiment explores adjustments to data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.

More details will be shared in future experiments.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_yam-peleg__Experiment23-7B).

| Metric                           | Value |
|----------------------------------|------:|
| Avg.                             | 75.31 |
| AI2 Reasoning Challenge (25-Shot)| 72.35 |
| HellaSwag (10-Shot)              | 88.77 |
| MMLU (5-Shot)                    | 64.17 |
| TruthfulQA (0-shot)              | 78.87 |
| Winogrande (5-shot)              | 85.32 |
| GSM8k (5-shot)                   | 62.40 |
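
**Usage**

The card declares `library_name: transformers` and `pipeline_tag: text-generation`, so the model can be loaded with the standard `transformers` API. The snippet below is a minimal sketch: the dtype, `device_map` setting, sampling parameters, and the example prompt are illustrative assumptions, not settings prescribed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yam-peleg/Experiment23-7B"

# Load tokenizer and model; dtype and device placement are illustrative choices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-style prompt. Whether this checkpoint ships a chat template is
# an assumption; fall back to the plain prompt if it does not.
messages = [{"role": "user", "content": "Explain what a training pipeline is in one sentence."}]
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
else:
    prompt = messages[0]["content"]

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```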