---
license: apache-2.0
datasets:
- fblgit/tree-of-knowledge
- Open-Orca/SlimOrca-Dedup
- HuggingFaceH4/ultrafeedback_binarized
library_name: transformers
tags:
- juanako
- UNA
- Mistral
---
# Model Card for una-cybertron-7b-v1 (UNA: Uniform Neural Alignment)

We strike back, introducing **Cybertron 7B**, a Mistral-based model and the best of its series. Trained with SFT, DPO, and UNA (Uniform Neural Alignment) on multiple datasets.

It scores **64.56** on the HuggingFace leaderboard tests (without DROP for now).

| Model | Average ⬆️ | ARC (25-s) ⬆️ | HellaSwag (10-s) ⬆️ | MMLU (5-s) ⬆️ | TruthfulQA (MC) (0-s) ⬆️ | Winogrande (5-s) | GSM8K (5-s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |
| [perlthoughts/Chupacabra-7B-v2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2) | 63.54 | 66.47 | 85.17 | 64.49 | 57.60 | 79.16 | 28.35 |
| [fblgit/una-cybertron-7b-v1](https://huggingface.co/fblgit/una-cybertron-7b-v1) | **64.56** | **68.17** | 84.92 | 62.07 | **63.98** | **80.90** | 27.33 |

The model excels in mathematics, logic, and reasoning; overall, a very smart model.

## Model Details

Trained with the UNA (Uniform Neural Alignment) technique (paper coming soon).

### Model Description

- **Developed by:** [juanako.ai](https://juanako.ai)
- **Author:** [Xavier M.](mailto:xavi@juanako.ai)
- **Model type:** MistralAI 7B
- **Funded by:** Cybertron H100s

### Prompt
The model works well with almost any prompt, but the ChatML format and the Alpaca system prompt give the best results.
```
<|im_start|>system
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
Explain QKV<|im_end|>
<|im_start|>assistant
```
```
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!

### Human: Explain QKV
### Assistant:
```
```
[Round <|round|>]
问：Explain QKV
答：
```
```
[Round <|round|>]
Question:Explain QKV
Answer:
```
```
Question:Explain QKV
Answer:
```

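As a minimal sketch (the helper function name here is hypothetical, not part of the model's tooling), the ChatML format shown above can be assembled programmatically before tokenization:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-format prompt like the example above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "- You are a helpful assistant chatbot.\n- You answer questions.",
    "Explain QKV",
)
print(prompt)

# The resulting string would then be tokenized and fed to the model, e.g.
# (sketch only, standard transformers API):
#   tokenizer = AutoTokenizer.from_pretrained("fblgit/una-cybertron-7b-v1")
#   model = AutoModelForCausalLM.from_pretrained("fblgit/una-cybertron-7b-v1")
```
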
## Evaluation

```
| Tasks |Version|Shots | Metric |Value | |Stderr|
|--------------|-------|------|--------|-----:|---|-----:|
|arc_challenge | | 25 |acc_norm|0.6817|± |0.0136|
|truthfulqa_mc2| | 0 |acc |0.6398|± |0.0151|
|hellaswag | | 10 |acc_norm|0.8492|± |0.0036|
|winogrande | | 0 |acc |0.8090|± |0.0110|
|gsm8k | | 5 |acc |0.2733|± |0.0137|
|mmlu | | 5 |acc |0.6207|± |0.1230|
| |average| |acc |0.6456| | |

| Groups |Version|Filter|n-shot|Metric|Value | |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.6207|± |0.1230|
| - humanities |N/A |none | 5|acc |0.5675|± |0.1125|
| - other |N/A |none | 5|acc |0.6933|± |0.1108|
| - social_sciences|N/A |none | 5|acc |0.7270|± |0.0666|
| - stem |N/A |none | 5|acc |0.5249|± |0.1311|
```
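
As a quick sanity check, the reported average is the plain mean of the six per-task scores:

```python
# Per-task scores from the evaluation table above
scores = {
    "arc_challenge": 0.6817,
    "truthfulqa_mc2": 0.6398,
    "hellaswag": 0.8492,
    "winogrande": 0.8090,
    "gsm8k": 0.2733,
    "mmlu": 0.6207,
}

# Unweighted mean, rounded to 4 decimals
average = round(sum(scores.values()) / len(scores), 4)
print(average)  # 0.6456, matching the value reported in the table
```
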
### Framework versions

- Transformers 4.35.0-UNA
- PyTorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1

### Citations
If you find Cybertron, Juanako, or any of our models useful, especially if you use them for your big brand, please cite:
```
@misc{unacybertron7a,
  title={Cybertron: Uniform Neural Alignment},
  author={Xavier Murias},
  year={2023},
  publisher={HuggingFace},
  journal={HuggingFace repository},
  howpublished={\url{https://huggingface.co/fblgit/una-cybertron-7b-v1}},
}
```