---
base_model: NousResearch/Llama-2-7b-hf
tags:
- llama-2
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- teknium/openhermes
model-index:
- name: openhermes-7b
  results: []
license: mit
language:
- en
---

# OpenHermes-7B-adapter

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ovkrkIIUwJ9azhPtW6dAb.png)

## Model description

**ADAPTER-ONLY VERSION**
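
Because this repo contains only the adapter weights, they need to be applied on top of the base model listed in the metadata. Below is a minimal loading sketch using `transformers` and `peft`; the adapter repo id and the Alpaca-style prompt are assumptions for illustration, so substitute the actual values:

```python
# Minimal sketch: load the base model, then apply this adapter on top.
# NOTE: adapter_id and the prompt template are assumptions, not confirmed
# by this card -- replace them with the actual repo path and format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-hf"   # base model from the card metadata
adapter_id = "teknium/OpenHermes-7B"     # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the LoRA weights

prompt = "### Instruction:\nExplain sample packing in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```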

OpenHermes 7B is the first fine-tune in the Hermes series to be trained on a fully open-source dataset!

What is unique about this 7B model is that it used sample packing, which speeds up training by many multiples when the average token count of dataset entries isn't near the max sequence length.
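
As an illustration of the idea (this is not the actual training code), sample packing concatenates tokenized examples into fixed-length blocks so that almost none of a batch is padding:

```python
# Illustrative sketch of sample packing: concatenate tokenized samples,
# separated by EOS, into blocks of at most max_len tokens.
from typing import Iterable, List

def pack_samples(tokenized: Iterable[List[int]], max_len: int, eos_id: int) -> List[List[int]]:
    packed, current = [], []
    for ids in tokenized:
        ids = ids + [eos_id]                  # mark the document boundary
        if current and len(current) + len(ids) > max_len:
            packed.append(current)            # block is full, start a new one
            current = []
        current.extend(ids[:max_len])         # clip pathologically long samples
    if current:
        packed.append(current)
    return packed

# Three short samples fill one 16-token block instead of three padded ones:
print(pack_samples([[1, 2, 3], [4, 5], [6, 7, 8, 9]], max_len=16, eos_id=0))
# [[1, 2, 3, 0, 4, 5, 0, 6, 7, 8, 9, 0]]
```

Real implementations usually also adjust the attention mask so tokens cannot attend across packed document boundaries; that detail is omitted here.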

OpenHermes was trained on 242,000 entries of primarily GPT-4 generated data, from open datasets across the AI landscape, including:

- GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
- WizardLM (v1, evol_instruct 70k), by WizardLM Team/nlpxucan
- Airoboros GPT-4 (v1.0), by JonDurbin
- Camel-AI's domain expert datasets, by the Camel-AI Team
- CodeAlpaca, by Sahil2801
- GPT4-LLM and Unnatural Instructions, by Microsoft

Filtering included removal of OpenAI refusals, disclaimers, and "As an AI"-style examples, among other cleanup.

The base dataset mix the model was trained on is identical to Nous-Hermes', minus the Nous-Instruct and PDACTL datasets, which were private.

The WANDB project is public and can be examined at this link: https://wandb.ai/teknium1/openhermes/runs/openhermes-v2-qlora-7b-packed

Huge thank you to [main_horse](https://twitter.com/main_horse) for compute access, to a16z for sponsoring my work, and to all the dataset creators and other people whose work has contributed to this project!

## Benchmark Results

GPT-4All Benchmark Set
```
| Task        |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.4727|± |0.0146|
|             |       |acc_norm|0.4957|± |0.0146|
|arc_easy     |      0|acc     |0.7862|± |0.0084|
|             |       |acc_norm|0.7643|± |0.0087|
|boolq        |      1|acc     |0.7801|± |0.0072|
|hellaswag    |      0|acc     |0.5789|± |0.0049|
|             |       |acc_norm|0.7654|± |0.0042|
|openbookqa   |      0|acc     |0.3480|± |0.0213|
|             |       |acc_norm|0.4500|± |0.0223|
|piqa         |      0|acc     |0.7867|± |0.0096|
|             |       |acc_norm|0.7938|± |0.0094|
|winogrande   |      0|acc     |0.7048|± |0.0128|

Average: 0.679
```
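
The reported average appears to be the mean of acc_norm where it is reported and acc otherwise; this interpretation is an inference from the numbers above, not something stated in the card:

```python
# Recompute the reported average: acc_norm where available, else acc.
scores = {
    "arc_challenge": 0.4957,  # acc_norm
    "arc_easy":      0.7643,  # acc_norm
    "boolq":         0.7801,  # acc (no acc_norm reported)
    "hellaswag":     0.7654,  # acc_norm
    "openbookqa":    0.4500,  # acc_norm
    "piqa":          0.7938,  # acc_norm
    "winogrande":    0.7048,  # acc (no acc_norm reported)
}
print(round(sum(scores.values()) / len(scores), 3))  # 0.679
```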

## Training procedure

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/Vzy7Z4Qcwj4hGJcQ2BT20.png)
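
The WANDB run name indicates QLoRA fine-tuning with sample packing. As a purely illustrative sketch, a comparable QLoRA setup with `transformers`, `bitsandbytes`, and `peft` might look like the following; every hyperparameter here is an assumption, since the actual training configuration is not stated in this card:

```python
# Illustrative QLoRA setup; all hyperparameters are assumptions, not the
# values used to train OpenHermes-7B.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize the frozen base to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
lora = LoraConfig(                           # train only small adapter matrices
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed values
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()           # a small fraction of the 7B weights
```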