fblgit committed
Commit a67ca9a
1 Parent(s): 8259969

Update README.md

Files changed (1):
  1. README.md +8 -26

README.md CHANGED
@@ -3,7 +3,7 @@ license: apache-2.0
 datasets:
 - fblgit/tree-of-knowledge
 - Open-Orca/SlimOrca-Dedup
-- HuggingFaceH4/ultrafeedback_binarized
+- allenai/ultrafeedback_binarized_cleaned
 library_name: transformers
 tags:
 - juanako
@@ -15,18 +15,20 @@ tags:
 # Model Card for una-cybertron-7b-v2-bf16 (UNA: Uniform Neural Alignment)
 
 We strike back, introducing **Cybertron 7B v2**, a 7B MistralAI-based model, the best of its series. Trained with SFT, DPO and UNA (Uniform Neural Alignment) on multiple datasets.
-He scores more than **64.60**+ on HF LeaderBoard at least, we'll update the final test soon, .. and we have in the oven a few surprises for all the christmas, subscribe.
+He scores [EXACTLY](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__una-cybertron-7b-v2-bf16) **#1** among 7B models with a **69.67**+ score on the HF LeaderBoard, and the **#8** top score across ALL sizes.
 
-* v1 Scoring **#1** at 2 December 2023 with 64.60
-* v2 Scoring **?** ..?
+* v1 scored **#1** on 2 December 2023 with 69.49
+* v2 scored **#1** on 5 December 2023 with 69.43
 
 
 | Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s) |
 | --- | --- | --- | --- | --- | --- | --- | --- |
 | [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |
+| [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2) | 68.29 | 67.49 | 83.92 | 63.55 | 59.68 | 79.95 | 55.12 |
+| [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7) | 68.29 | 67.49 | 83.92 | 63.55 | 59.68 | 79.95 | 55.12 |
 | [perlthoughts/Chupacabra-7B-v2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2) | 63.54 | 66.47 | 85.17 | 64.49 | 57.6 | 79.16 | 28.35 |
-| [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) | **64.60** | **68.17** | 85.14 | 62.07 | **63.98** | **80.9** | 27.34 |
-| [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) | **6?.?0** | **68.17** | **85.?4** | 62.07 | **6?.98** | **80.9** | **?0.34** |
+| [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) | **69.49** | **68.43** | **85.85** | 63.34 | **63.28** | **80.90** | **55.12** |
+| [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) | **69.67** | **68.26** | **85.?4** | 63.23 | **64.63** | **81.37** | **55.04** |
 
 The model excels in mathematics, logic and reasoning; overall a very smart model.
 
@@ -74,26 +76,6 @@ Question:Explain QKV
 Answer:
 ```
 
-## Evaluation (UNA-Cybertron-7B-v1-fp16)
-```
-| Tasks |Version|Shots | Metric |Value | |Stderr|
-|--------------|-------|------|--------|-----:|---|-----:|
-|arc_challenge | | 25 |acc_norm|0.6817|± |0.0136|
-|truthfulqa_mc2| | 0 |acc |0.6398|± |0.0151|
-|hellaswag | | 10 |acc_norm|0.8492|± |0.0036|
-|winogrande | | 0 |acc |0.809 |± |0.011 |
-|gsm8k | | 5 |acc |0.2733|± |0.0137|
-|mmlu | | 5 |acc |0.6207|± |0.1230|
-| |average| |acc |0.6456| | |
-
-| Groups |Version|Filter|n-shot|Metric|Value | |Stderr|
-|------------------|-------|------|-----:|------|-----:|---|-----:|
-|mmlu |N/A |none | 0|acc |0.6207|± |0.1230|
-| - humanities |N/A |none | 5|acc |0.5675|± |0.1125|
-| - other |N/A |none | 5|acc |0.6933|± |0.1108|
-| - social_sciences|N/A |none | 5|acc |0.7270|± |0.0666|
-| - stem |N/A |none | 5|acc |0.5249|± |0.1311|
-```
 
 ### Framework versions
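
The Average column in the leaderboard table above is just the arithmetic mean of the six per-benchmark scores. A minimal sanity check, with the values copied from the una-cybertron-7b-v1-fp16 row of the updated table:

```python
# Per-benchmark scores for fblgit/una-cybertron-7b-v1-fp16,
# taken from the leaderboard table in the README diff above.
scores = {
    "ARC (25-s)": 68.43,
    "HellaSwag (10-s)": 85.85,
    "MMLU (5-s)": 63.34,
    "TruthfulQA (MC) (0-s)": 63.28,
    "Winogrande (5-s)": 80.90,
    "GSM8K (5-s)": 55.12,
}

# The leaderboard "Average" is the plain arithmetic mean of the six tasks.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 69.49, matching the Average column for v1
```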
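
The diff's prompting example ends with a bare `Question:` / `Answer:` pair. A hypothetical helper that formats a query in that style (the function name is mine, not from the card):

```python
def format_prompt(question: str) -> str:
    """Format a query in the plain Question/Answer style shown in the
    card's example ("Question:Explain QKV" followed by "Answer:")."""
    return f"Question:{question}\nAnswer:"

# Reproduces the example prompt from the README diff above.
print(format_prompt("Explain QKV"))
# Question:Explain QKV
# Answer:
```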