---
license: unknown
model-index:
- name: contaminated_proof_7b_v1.0_safetensor
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 78.07
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Contamination/contaminated_proof_7b_v1.0_safetensor
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 90.22
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Contamination/contaminated_proof_7b_v1.0_safetensor
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.92
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Contamination/contaminated_proof_7b_v1.0_safetensor
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 82.29
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Contamination/contaminated_proof_7b_v1.0_safetensor
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 88.16
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Contamination/contaminated_proof_7b_v1.0_safetensor
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Contamination/contaminated_proof_7b_v1.0_safetensor
      name: Open LLM Leaderboard
---

#### This model has the same weights as [Contamination/contaminated_proof_7b_v1.0](https://huggingface.co/Contamination/contaminated_proof_7b_v1.0)

# WARNING: Contamination

This model is TOTALLY CONTAMINATED, which makes it unreliable. DO NOT USE THIS MODEL FOR ANY PURPOSE. PLEASE USE IT FOR REFERENCE ONLY.

This model was trained on [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) data to give it conversational ability.

# MODEL ARCHITECTURE

This model was initialized from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1/tree/main). A minimal loading sketch for inspection appears after the results table below.

# PLEASE NOTE

Users and sponsors should be aware that many other models are also unreliable. I hope our model can show the vulnerability of the leaderboard.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Contamination__contaminated_proof_7b_v1.0_safetensor)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 81.14 |
| AI2 Reasoning Challenge (25-Shot) | 78.07 |
| HellaSwag (10-Shot)               | 90.22 |
| MMLU (5-Shot)                     | 78.92 |
| TruthfulQA (0-shot)               | 82.29 |
| Winogrande (5-shot)               | 88.16 |
| GSM8k (5-shot)                    | 69.14 |
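# LOADING FOR INSPECTION ONLY

For reference only (in line with the warning above), here is a minimal sketch of how the checkpoint could be loaded with `transformers` to inspect its configuration, for example to confirm that it reports the Mistral architecture described in this card. The repository id is taken from this card; the dtype and device settings are assumptions, not part of the original release.

```python
# Minimal sketch, not part of the original card: load the checkpoint purely for
# inspection, e.g. to verify that it matches the Mistral-7B-v0.1 architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Contamination/contaminated_proof_7b_v1.0_safetensor"  # repo id from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to keep a 7B model in memory
    device_map="auto",           # requires the `accelerate` package
)

# The card states the weights are identical to Contamination/contaminated_proof_7b_v1.0
# and were initialized from mistralai/Mistral-7B-v0.1, so the config should report
# the Mistral architecture.
print(model.config.model_type)                                   # expected: "mistral"
print(sum(p.numel() for p in model.parameters()) / 1e9, "B parameters")
```

This only inspects the checkpoint's metadata and parameter count; given the contamination, no generated output from this model should be treated as a meaningful benchmark result.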