Files changed (1)
  1. README.md +118 -2
README.md CHANGED
@@ -1,12 +1,115 @@
 ---
-license: apache-2.0
 language:
 - fr
 - it
 - de
 - es
 - en
+license: apache-2.0
 inference: false
+model-index:
+- name: mixtral-instruct-0.1-laser
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 70.48
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/mixtral-instruct-0.1-laser
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 87.28
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/mixtral-instruct-0.1-laser
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 71.07
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/mixtral-instruct-0.1-laser
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 65.83
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/mixtral-instruct-0.1-laser
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 80.82
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/mixtral-instruct-0.1-laser
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 58.68
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/mixtral-instruct-0.1-laser
+      name: Open LLM Leaderboard
 ---
 # Model Card for Mixtral-8x7B-Laser
 LaserRMT version of the Mixtral-8x7b-Instruct by Fernando Fernandes Neto @ Cognitive Computations
@@ -138,4 +241,17 @@ It does not have any moderation mechanisms. We're looking forward to engaging wi
 make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
 
 # The Mistral AI Team
-Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
+Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cognitivecomputations__mixtral-instruct-0.1-laser)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |72.36|
+|AI2 Reasoning Challenge (25-Shot)|70.48|
+|HellaSwag (10-Shot)              |87.28|
+|MMLU (5-Shot)                    |71.07|
+|TruthfulQA (0-shot)              |65.83|
+|Winogrande (5-shot)              |80.82|
+|GSM8k (5-shot)                   |58.68|
+
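
Below the diff: a minimal sketch (not part of README.md) of how the `model-index` block added above can be read back programmatically, assuming the `huggingface_hub` Python package and the repo id `cognitivecomputations/mixtral-instruct-0.1-laser` taken from the leaderboard query URLs; it also re-derives the `Avg.` row of the table as the unweighted mean of the six benchmark scores.

```python
# Sketch only: load the card's YAML front matter and walk the model-index
# entries added in this change, then recompute the leaderboard average.
from huggingface_hub import ModelCard

card = ModelCard.load("cognitivecomputations/mixtral-instruct-0.1-laser")
meta = card.data.to_dict()  # front matter as a plain dict, including "model-index"

scores = []
for result in meta["model-index"][0]["results"]:
    dataset = result["dataset"]["name"]
    for metric in result["metrics"]:
        print(f"{dataset}: {metric['type']} = {metric['value']}")
        scores.append(metric["value"])

# The "Avg." row is the unweighted mean of the six scores:
# (70.48 + 87.28 + 71.07 + 65.83 + 80.82 + 58.68) / 6 ≈ 72.36
print(f"Average: {sum(scores) / len(scores):.2f}")
```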