---
base_model: mistralai/Mistral-7B-v0.1
tags:
- llama-2
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- jondurbin/airoboros-2.2.1
model-index:
- name: airoboros2.2-mistral-7b
  results: []
license: mit
language:
- en
---

Mistral trained with the airoboros dataset!

The actual training dataset is airoboros 2.2, but it appears to have been replaced on Hugging Face with 2.2.1.
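
Below is a minimal inference sketch using `transformers`. The hub repo id is an assumption inferred from the model name in the metadata above, and the card does not specify a prompt template, so plain text is used:

```python
# Minimal inference sketch. The repo id below is an assumption inferred from
# the model-index name above; substitute the actual hub path if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/airoboros2.2-mistral-7b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # same dtype as the eval run below
    device_map="auto",
)

# The card does not specify a prompt template, so a raw prompt is used here.
prompt = "Explain the difference between nuclear fission and fusion."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```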

TruthfulQA:

```
hf-causal-experimental (pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.3562|± |0.0168|
| | |mc2 |0.5217|± |0.0156|
```
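
The header line inside the block records the full harness configuration, so the numbers should be reproducible with lm-evaluation-harness. Here is a sketch using its Python API, assuming an older (pre-refactor) harness release that still provides the `hf-causal-experimental` model type; swap the local model path for your own:

```python
# Sketch of reproducing the TruthfulQA run above with lm-evaluation-harness.
# Assumes an older harness release that ships the hf-causal-experimental
# model type; the pretrained path is the local one from the run above.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16",
    tasks=["truthfulqa_mc"],
    num_fewshot=0,
    batch_size=8,
    limit=None,
)
print(results["results"]["truthfulqa_mc"])  # reports mc1 and mc2 with stderr
```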

More info to come