winglian committed cb03cdd (1 parent: 70fde0f): Update README.md
Files changed (1): README.md (+89, -1)
README.md CHANGED

language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

<p><h1>🐋 TBD 🐋</h1></p>

![OpenOrca Logo](https://huggingface.co/datasets/Open-Orca/OpenOrca/resolve/main/OpenOrcaLogo.png "OpenOrca Logo")
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

# OpenOrca - Mistral - 7B - 8k

We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1).
This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707).
We used [OpenChat](https://huggingface.co/openchat) packing and trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

This release is trained on a curated, filtered subset of most of our GPT-4 augmented data.
It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).

HF Leaderboard evals place this model as #2 among all models smaller than 30B at release time, outperforming all but one 13B model.

TBD

Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).

[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)

We are in the process of training more models, so keep an eye on our org for releases coming soon with exciting partners.

We will also give sneak-peek announcements on our Discord, which you can find here:

https://AlignmentLab.ai

or on the OpenAccess AI Collective Discord, for more information about the Axolotl trainer:

https://discord.gg/5y8STgB3P3

# Prompt Template

We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this.

## Example Prompt Exchange

TBD
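
Until the example above is filled in, here is a minimal sketch of what a ChatML exchange with this model could look like using `transformers`. The repository id, system message, and generation settings below are illustrative placeholders, not values taken from this card.

```python
# Illustrative sketch only: the repository id and generation settings are
# placeholders, not confirmed values from this model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/TBD"  # hypothetical placeholder; substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a ChatML-style prompt using the <|im_start|> / <|im_end|> special tokens.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "How are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated assistant turn.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```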

# Evaluation

We have evaluated this model using the methodology and tools of the HuggingFace Leaderboard, and find that it significantly improves upon the base model.

TBD
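
While the headline numbers are still TBD, at the time of this release the HuggingFaceH4 Open LLM Leaderboard reported the simple average of four benchmarks (ARC 25-shot, HellaSwag 10-shot, MMLU 5-shot, TruthfulQA 0-shot). A minimal sketch of that aggregation, with placeholder scores, is below.

```python
# Placeholder scores only: the real numbers are TBD above.
# The headline Leaderboard score at release time was the simple mean of
# ARC (25-shot), HellaSwag (10-shot), MMLU (5-shot) and TruthfulQA (0-shot).
scores = {
    "ARC (25-shot)": 0.0,        # TBD
    "HellaSwag (10-shot)": 0.0,  # TBD
    "MMLU (5-shot)": 0.0,        # TBD
    "TruthfulQA (0-shot)": 0.0,  # TBD
}
average = sum(scores.values()) / len(scores)
print(f"Open LLM Leaderboard average: {average:.2f}")
```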

## HuggingFaceH4 Open LLM Leaderboard Performance

TBD

## GPT4ALL Leaderboard Performance

TBD

# Dataset

We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset.

# Training

We trained with 8x A6000 GPUs for 62 hours, completing 4 epochs of full fine-tuning on our dataset in one training run.
Commodity cost was ~$400.
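For context, that works out to 8 GPUs × 62 hours = 496 GPU-hours, or roughly $0.80 per GPU-hour at the ~$400 total.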

# Citation

```bibtex
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{longpre2023flan,
      title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
      author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
      year={2023},
      eprint={2301.13688},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```