alpayariyak committed
Commit 5789b46
1 Parent(s): d9afa57

Update README.md

Files changed (1):
  1. README.md +131 -84

README.md CHANGED
@@ -1,65 +1,81 @@
- ---
- license: apache-2.0
- tags:
- - openchat
- - mistral
- - C-RLFT
- datasets:
- - openchat/openchat_sharegpt4_dataset
- - kaist-ai/Feedback-Collection
- - imone/OpenOrca_FLAN
- - LDJnr/LessWrong-Amplify-Instruct
- - LDJnr/Pure-Dove
- - LDJnr/Verified-Camel
- - tiedong/goat
- - glaiveai/glaive-code-assistant
- - meta-math/MetaMathQA
- - OpenAssistant/oasst_top1_2023-08-25
- - TIGER-Lab/MathInstruct
- library_name: transformers
- pipeline_tag: text-generation
- ---

  <div align="center">
  <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
  </div>

- <p align="center">
- <a href="https://github.com/imoneoi/openchat">GitHub Repo</a> •
- <a href="https://openchat.team">Online Demo</a> •
- <a href="https://discord.gg/pQjnXvNKHY">Discord</a> •
- <a href="https://twitter.com/imonenext">Twitter</a> •
- <a href="https://huggingface.co/openchat">Huggingface</a> •
- <a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a>
  </p>

- # OpenChat 3.5: First Update Released on December 10th!
-
- **🚀 15-point improvement in coding performance**
-
- **💡 Introducing a coding & generalist mode and a mathematical reasoning mode**
-
- **🧑‍⚖️ Experimental support for evaluator and feedback capabilities**
-
- **🤖 Outperforms Grok-1 in 3/4 and ChatGPT (March) in 5/8 benchmarks**

- | Model                       | Size   | HumanEval+ pass@1 |
- |-----------------------------|--------|-------------------|
- | ChatGPT (December 12, 2023) | -      | 64.6              |
- | WizardCoder-Python-34B-V1.0 | 34B    | 64.6              |
- | **OpenChat 3.5 (Dec 10)**   | **7B** | **63.4**          |
- | OpenHermes 2.5              | 7B     | 41.5              |

 
53
  <div style="display: flex; justify-content: center; align-items: center">
54
- <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat.png" style="width: 45%;">
55
- <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat_grok.png" style="width: 45%;">
56
  </div>
57
 
58
- OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
 
 
59
 
60
- [![DOI](https://zenodo.org/badge/645397533.svg)](https://zenodo.org/badge/latestdoi/645397533)
61
 
62
- ## Usage
 
 
 
 
 
 
 
 
 
 
 
63
 
64
  To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
65
 
@@ -115,7 +131,7 @@ Math Correct User: 10.3 − 7988.8133=<|end_of_turn|>Math Correct Assistant:

  ⚠️ **Notice:** Remember to set `<|end_of_turn|>` as the end-of-generation token.

- The default (GPT4 Correct) template is also available as the integrated `tokenizer.chat_template`,
  which can be used instead of manually specifying the template:

  ```python
@@ -128,8 +144,9 @@ tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
  assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
  ```

- ## 🧑‍⚖️ (Experimental) Evaluator / Feedback Capabilities
-
  We've included evaluator capabilities in this release to advance open-source models as evaluators. You can use `Default Mode (GPT4 Correct)` with the following prompt (same as [Prometheus](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)) to evaluate a response.

  ```
@@ -159,19 +176,9 @@ Score 5: {orig_score5_description}

  ###Feedback:
  ```

- ## Comparison with [X.AI Grok models](https://x.ai/)
-
- |                   | License     | # Param | Average  | MMLU | HumanEval | MATH     | GSM8k    |
- |-------------------|-------------|---------|----------|------|-----------|----------|----------|
- | OpenChat 3.5 1210 | Apache-2.0  | **7B**  | **60.1** | 65.3 | **68.9**  | **28.9** | **77.3** |
- | OpenChat 3.5      | Apache-2.0  | **7B**  | 56.4     | 64.3 | 55.5      | 28.6     | **77.3** |
- | Grok-0            | Proprietary | 33B     | 44.5     | 65.7 | 39.7      | 15.7     | 56.8     |
- | Grok-1            | Proprietary | ???B    | 55.8     | 73   | 63.2      | 23.9     | 62.9     |
-
- *: Grok results are reported by [X.AI](https://x.ai/).
-
- ## <a id="benchmarks"></a> Benchmarks

  | Model              | # Params | Average  | MT-Bench     | HumanEval       | BBH MC   | AGIEval  | TruthfulQA    | MMLU         | GSM8K        | BBH CoT     |
  |--------------------|----------|----------|--------------|-----------------|----------|----------|---------------|--------------|--------------|-------------|
@@ -183,9 +190,8 @@ Score 5: {orig_score5_description}
  | OpenOrca Mistral   | 7B       | 52.7     | 6.86         | 38.4            | 49.4     | 42.9     | 45.9          | 59.3         | 59.1         | 58.1        |
  | Zephyr-β^          | 7B       | 34.6     | 7.34         | 22.0            | 40.6     | 39.0     | 40.8          | 39.8         | 5.1          | 16.0        |
  | Mistral            | 7B       | -        | 6.84         | 30.5            | 39.0     | 38.0     | -             | 60.1         | 52.2         | -           |
- | Open-source SOTA** | 13B-70B  | 61.4     | 7.71         | 73.2            | 49.7     | 41.7     | 62.3          | 63.7         | 82.3         | 41.4        |
- |                    |          |          | WizardLM 70B | WizardCoder 34B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | MetaMath 70B | Flan-T5 11B |
-
  *: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.

  ^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.
@@ -193,15 +199,48 @@ Score 5: {orig_score5_description}
  **: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.

  All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).

- ## Limitations

  **Foundation Model Limitations**
  Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:

- - Complex reasoning
- - Mathematical and arithmetic tasks
- - Programming and coding challenges

  **Hallucination of Non-existent Information**
  OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
@@ -210,25 +249,31 @@ OpenChat may sometimes generate information that does not exist or is not accura
  OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.

  ## License

  Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.

- ## Dataset Details

  OpenChat 3.5 was trained with C-RLFT on a collection of publicly available high-quality instruction data, with a custom processing pipeline. We detail some notable subsets included here:

- - [OpenChat ShareGPT](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset)
- - [Open-Orca with FLAN answers](https://huggingface.co/datasets/imone/OpenOrca_FLAN)
- - [Feedback-Collection](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)
- - Capybara [1](https://huggingface.co/datasets/LDJnr/Pure-Dove) [2](https://huggingface.co/datasets/LDJnr/Verified-Camel) [3](https://huggingface.co/datasets/LDJnr/LessWrong-Amplify-Instruct)
- - [GOAT](https://huggingface.co/datasets/tiedong/goat)
- - [Glaive](https://huggingface.co/datasets/glaiveai/glaive-code-assistant)
- - [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- - [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- - [OpenAssistant](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25)
-
- ## Citation

  ```
  @article{wang2023openchat,
  title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
@@ -238,7 +283,9 @@ OpenChat 3.5 was trained with C-RLFT on a collection of publicly available high-
  }
  ```

- ## Acknowledgements

  We extend our heartfelt gratitude to AutoMeta and caesus from Alignment Lab AI, LDJ and Teknium from Nous Research, and alpin and TearGosling from Pygmalion AI for their substantial contributions to data collection and model training.


  <div align="center">
  <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
+ <h1>Advancing Open-source Language Models with Mixed-Quality Data</h1>
  </div>

+ <p align="center" style="margin-top: 0px;">
+ <a href="https://openchat.team">
+ <img src="https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
+ <span class="link-text" style="margin-right: 5px;">OpenChat Online Demo</span>
+ </a> |
+ <a href="https://github.com/imoneoi/openchat">
+ <img src="https://camo.githubusercontent.com/4133dc1cd4511d4a292b84ce10e52e4ed92569fb2a8165381c9c47be5edc2796/68747470733a2f2f6564656e742e6769746875622e696f2f537570657254696e7949636f6e732f696d616765732f706e672f6769746875622e706e67" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
+ <span class="link-text" style="margin-right: 5px;">GitHub</span>
+ </a> |
+ <a href="https://arxiv.org/pdf/2309.11235.pdf">
+ <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
+ <span class="link-text" style="margin-right: 5px;">Paper</span>
+ </a> |
+ <a href="https://discord.gg/pQjnXvNKHY">
+ <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
+ <span class="link-text">Discord</span>
+ </a>
  </p>

+ <hr>
+ <div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center;">
+ <a href="https://huggingface.co/openchat/openchat_3.5" style="text-decoration: none; color: black;">
+ <span style="font-size: 0.7em; font-family: 'Helvetica'; color: white; vertical-align: top; background-color: white; border-radius: 6em; padding: 0.04em 0.4em; letter-spacing: 0.1em; font-weight: bold">3.51210</span>
+ <span style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db;">3.5</span>
+ <span style="font-size: 0.7em; font-family: 'Helvetica'; color: white; vertical-align: top; background-color: red; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">1210</span>
+ <span style="font-size: 1em; font-family: 'Helvetica'; color: black;">
+ <br> 🤖 Outperforms <span style="font-weight: bold;">ChatGPT</span> (March) and <span style="font-weight: bold;">Grok-1</span> on most benchmarks 🤖
+ <br> 🚀 <span style="font-size: 1em; font-family: 'Helvetica'; color: black; font-weight: bold;">15</span>-point improvement in coding performance over <span style="font-size: 0.9em; font-family: 'Helvetica'; color: black; font-weight: bold;">OpenChat-3.5</span> 🚀
+ <br><span style="font-size: 1em; font-family: 'Helvetica'; color: #3c72db; font-weight: bold;">New Features</span>
+ <br> 💡 2 Modes: Coding + Generalist, Mathematical Reasoning 💡
+ <br> 🧑‍⚖️ Experimental support for Evaluator and Feedback capabilities 🧑‍⚖️
+ </span>
+ </a>
+ </div>

+ <!-- <a href="https://huggingface.co/openchat/openchat_3.5">
+ <button class="common-button">Model Repo</button>
+ </a>
+ <a href="https://openchat.team">
+ <button class="common-button">OpenChatUI Demo</button>
+ </a>
+ <a href="https://huggingface.co/spaces/openchat/openchat_3.5">
+ <button class="common-button">HuggingFace Space</button>
+ </a>
+ <a href="https://arxiv.org/pdf/2309.11235.pdf">
+ <button class="common-button">Paper</button>
+ </a>
+ -->

  <div style="display: flex; justify-content: center; align-items: center">
+ <img src="https://github.com/alpayariyak/openchat/blob/master/assets/1210bench.png?raw=true" style="width: 100%; border-radius: 1em">
  </div>

+ <div>
+ <h3>Table of Contents</h3>
+ </div>

+ 1. [Usage](#usage)
+ 2. [Benchmarks](#benchmarks)
+ 3. [Limitations](#limitations)
+ 4. [License](#license)
+ 5. [Dataset Details](#dataset-details)
+ 6. [Citation](#citation)
+ 7. [Acknowledgements](#acknowledgements)
+
+ <div align="center">
+ <h2> Usage </h2>
+ </div>

  To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
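Once the server is up, any OpenAI-style client can talk to it. Below is a minimal sketch using only the Python standard library; the port (18888), the `openchat_3.5` model name, and the `condition` field follow the serving examples in the OpenChat repository and should be adjusted to match your deployment.

```python
# Minimal sketch of calling the OpenChat OpenAI-compatible API server.
# Assumptions: server on localhost:18888, model registered as "openchat_3.5",
# and the "condition" field selecting the conversation template.
import json
import urllib.request


def build_chat_request(prompt: str, condition: str = "GPT4 Correct") -> dict:
    """Build the JSON body for a /v1/chat/completions call.

    `condition` selects the mode: "GPT4 Correct" for the default
    coding/generalist mode, "Math Correct" for mathematical reasoning.
    """
    return {
        "model": "openchat_3.5",
        "condition": condition,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str, base_url: str = "http://localhost:18888") -> str:
    """POST the request and return the assistant's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Any client library that speaks the OpenAI chat-completions protocol can be pointed at the same endpoint instead of using raw `urllib`.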

  ⚠️ **Notice:** Remember to set `<|end_of_turn|>` as the end-of-generation token.

+ The default (GPT4 Correct) template is also available as the integrated `tokenizer.chat_template`,
  which can be used instead of manually specifying the template:

  ```python

  assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
  ```
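For serving stacks that cannot call `apply_chat_template`, the same turns can be assembled by hand. The helper below is our own illustrative function (not part of the OpenChat package); it reproduces the GPT4 Correct template format shown in this card.

```python
# Sketch: assembling the GPT4 Correct conversation template by hand,
# with <|end_of_turn|> as the turn separator / stop marker.
EOT = "<|end_of_turn|>"


def gpt4_correct_prompt(turns: list) -> str:
    """turns: (role, text) pairs with role "User" or "Assistant"."""
    parts = [f"GPT4 Correct {role}: {text}{EOT}" for role, text in turns]
    # A trailing assistant tag cues the model to produce the next reply.
    return "".join(parts) + "GPT4 Correct Assistant:"
```

When generating from such a prompt, pass the token id of `<|end_of_turn|>` as the end-of-sequence id so decoding stops at the turn boundary, per the notice above.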

+ <div align="center">
+ <h2> (Experimental) Evaluator / Feedback Capabilities </h2>
+ </div>

  We've included evaluator capabilities in this release to advance open-source models as evaluators. You can use `Default Mode (GPT4 Correct)` with the following prompt (same as [Prometheus](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)) to evaluate a response.

  ```

  ###Feedback:
  ```
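Once the evaluator replies, the numeric score still has to be pulled out of free text. Prometheus-style evaluators conventionally end their feedback with `[RESULT] <score>`; the parser below assumes that convention (it is not specified in this card) and degrades gracefully when no score is present.

```python
# Sketch: extracting the 1-5 score from an evaluator reply, assuming the
# Prometheus convention that feedback ends with "[RESULT] <score>".
import re


def parse_feedback(reply: str):
    """Split an evaluator reply into (feedback_text, score or None)."""
    text = reply.strip()
    match = re.search(r"\[RESULT\]\s*([1-5])\s*$", text)
    if not match:
        return text, None
    return text[: match.start()].strip(), int(match.group(1))
```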
+ <div align="center">
+ <h2> Benchmarks </h2>
+ </div>

  | Model              | # Params | Average  | MT-Bench     | HumanEval       | BBH MC   | AGIEval  | TruthfulQA    | MMLU         | GSM8K        | BBH CoT     |
  |--------------------|----------|----------|--------------|-----------------|----------|----------|---------------|--------------|--------------|-------------|
  | OpenOrca Mistral   | 7B       | 52.7     | 6.86         | 38.4            | 49.4     | 42.9     | 45.9          | 59.3         | 59.1         | 58.1        |
  | Zephyr-β^          | 7B       | 34.6     | 7.34         | 22.0            | 40.6     | 39.0     | 40.8          | 39.8         | 5.1          | 16.0        |
  | Mistral            | 7B       | -        | 6.84         | 30.5            | 39.0     | 38.0     | -             | 60.1         | 52.2         | -           |
+ <details>
+ <summary>Evaluation Details (click to expand)</summary>

  *: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.

  ^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.

  **: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.

  All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).
+ </details>

+ <div>
+ <h3>HumanEval+</h3>
+ </div>

+ | Model                       | Size   | HumanEval+ pass@1 |
+ |-----------------------------|--------|-------------------|
+ | ChatGPT (December 12, 2023) | -      | 64.6              |
+ | WizardCoder-Python-34B-V1.0 | 34B    | 64.6              |
+ | **OpenChat 3.5 (Dec 10)**   | **7B** | **63.4**          |
+ | OpenHermes 2.5              | 7B     | 41.5              |
+
+ <div>
+ <h3>OpenChat-3.5-1210 vs. Grok</h3>
+ </div>
+
+ |                   | License     | # Param | Average  | MMLU | HumanEval | MATH     | GSM8k    |
+ |-------------------|-------------|---------|----------|------|-----------|----------|----------|
+ | OpenChat 3.5 1210 | Apache-2.0  | **7B**  | **60.1** | 65.3 | **68.9**  | **28.9** | **77.3** |
+ | OpenChat 3.5      | Apache-2.0  | **7B**  | 56.4     | 64.3 | 55.5      | 28.6     | **77.3** |
+ | Grok-0            | Proprietary | 33B     | 44.5     | 65.7 | 39.7      | 15.7     | 56.8     |
+ | Grok-1            | Proprietary | ???B    | 55.8     | 73   | 63.2      | 23.9     | 62.9     |
+
+ *: Grok results are reported by [X.AI](https://x.ai/).

+ <div align="center">
+ <h2> Limitations </h2>
+ </div>

  **Foundation Model Limitations**
  Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:

+ - Complex reasoning
+ - Mathematical and arithmetic tasks
+ - Programming and coding challenges

  **Hallucination of Non-existent Information**
  OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.

  OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.

+ <div align="center">
+ <h2> License </h2>
+ </div>

  Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.

+ <div align="center">
+ <h2> Dataset Details </h2>
+ </div>

  OpenChat 3.5 was trained with C-RLFT on a collection of publicly available high-quality instruction data, with a custom processing pipeline. We detail some notable subsets included here:

+ - [OpenChat ShareGPT](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset)
+ - [Open-Orca with FLAN answers](https://huggingface.co/datasets/imone/OpenOrca_FLAN)
+ - [Feedback-Collection](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)
+ - Capybara [1](https://huggingface.co/datasets/LDJnr/Pure-Dove) [2](https://huggingface.co/datasets/LDJnr/Verified-Camel) [3](https://huggingface.co/datasets/LDJnr/LessWrong-Amplify-Instruct)
+ - [GOAT](https://huggingface.co/datasets/tiedong/goat)
+ - [Glaive](https://huggingface.co/datasets/glaiveai/glaive-code-assistant)
+ - [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
+ - [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
+ - [OpenAssistant](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25)

+ <div align="center">
+ <h2> Citation </h2>
+ </div>

  ```
  @article{wang2023openchat,
  title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},

  }
  ```

+ <div align="center">
+ <h2> Acknowledgements </h2>
+ </div>

  We extend our heartfelt gratitude to AutoMeta and caesus from Alignment Lab AI, LDJ and Teknium from Nous Research, and alpin and TearGosling from Pygmalion AI for their substantial contributions to data collection and model training.