abideen committed on
Commit afe7dee
1 Parent(s): c3b37bc

Update README.md

Files changed (1):
  1. README.md +31 -0
README.md CHANGED
@@ -5,6 +5,9 @@ tags:
 - merge
 - abideen/NexoNimbus-7B
 - mlabonne/NeuralMarcoro14-7B
+language:
+- en
+library_name: transformers
 ---
 
 # NexoNimbus-MoE-2x7B
@@ -15,6 +18,32 @@ NexoNimbus-MoE-2x7B is a Mixture of Experts (MoE) made with the following models:
 * [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B)
 * [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)
 
+## 🏆 Evaluation
+NexoNimbus-MoE-2x7B is the 10th best-performing 13B LLM on the Open LLM Leaderboard:
+
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/z8E728H5fJqVtKNeGuwjX.png)
+
+
+| Task          | Version | Metric   | Value |   | Stderr |
+|---------------|--------:|----------|------:|---|-------:|
+| arc_challenge |       0 | acc      | 68.25 | ± |   1.36 |
+|               |         | acc_norm | 70.81 | ± |   1.38 |
+| hellaswag     |       0 | acc      | 70.86 | ± |   0.45 |
+|               |         | acc_norm | 87.86 | ± |   0.32 |
+| gsm8k         |       0 | acc      | 70.35 | ± |   1.25 |
+| winogrande    |       0 | acc      | 84.84 | ± |   1.00 |
+| mmlu          |       0 | acc      | 64.69 | ± |   1.00 |
+
+Average: 73.5%
+
+### TruthfulQA
+| Task          | Version | Metric | Value |   | Stderr |
+|---------------|--------:|--------|------:|---|-------:|
+| truthfulqa_mc |       1 | mc1    | 46.26 | ± |   1.74 |
+|               |         | mc2    | 62.42 | ± |   1.54 |
+
+
 ## 🧩 Configuration
 
 ```yaml
@@ -64,6 +93,8 @@ experts:
 
 ## 💻 Usage
 
+Here's a [Colab notebook](https://colab.research.google.com/drive/1F9lzL1IeZRMgiSbY9UbgCR__RreIflJh?usp=sharing) to run NexoNimbus-MoE-2x7B in 4-bit precision on a free T4 GPU.
+
 ```python
 !pip install -qU transformers bitsandbytes accelerate
 
 
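These scores follow the output format of EleutherAI's lm-evaluation-harness, which also powers the Open LLM Leaderboard. As a minimal sketch of reproducing one row locally, assuming the 0.4.x harness CLI and a repo id inferred from the commit author (neither is stated in this commit, and the leaderboard pins an older harness revision, so exact numbers may differ):

```python
# Sketch only: assumes lm-evaluation-harness 0.4.x and that the model lives
# at abideen/NexoNimbus-MoE-2x7B (repo id inferred, not stated in this diff).
!pip install -qU lm-eval

# 25-shot ARC-Challenge, the leaderboard setting behind the arc_challenge row.
!lm_eval --model hf \
  --model_args pretrained=abideen/NexoNimbus-MoE-2x7B \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --batch_size 4
```

The leaderboard's few-shot settings vary per task: 25 for ARC, 10 for HellaSwag, 5 for MMLU, Winogrande, and GSM8K, and 0 for TruthfulQA.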
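The yaml configuration itself is collapsed in this view; only its `experts:` key is visible in the final hunk header. For orientation, a mergekit MoE config for a two-expert 7B merge generally takes the shape below. This is an illustrative sketch, not the actual configuration from the README; `base_model`, `gate_mode`, and the prompt strings are all assumptions.

```yaml
# Illustrative mergekit-moe shape; placeholder values, not the real
# NexoNimbus-MoE-2x7B settings from the collapsed block.
base_model: abideen/NexoNimbus-7B   # assumed; the real base is not shown here
gate_mode: hidden                   # router init: hidden, cheap_embed, or random
dtype: bfloat16
experts:
  - source_model: abideen/NexoNimbus-7B
    positive_prompts:
      - "reasoning"                 # prompts that should route to this expert
  - source_model: mlabonne/NeuralMarcoro14-7B
    positive_prompts:
      - "chat"
```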
 
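The usage snippet is truncated in this view after the dependency install. A minimal continuation consistent with the Colab description (4-bit loading through bitsandbytes on a T4); the repo id and prompt are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "abideen/NexoNimbus-MoE-2x7B"  # assumed repo id

# 4-bit NF4 quantization keeps the ~13B-parameter MoE within a T4's 16 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Explain what a Mixture of Experts language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```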
100