---
license: apache-2.0
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
tags:
- human-feedback
- llama-2
size_categories:
- 1K<n<10K
pretty_name: Filtered OpenAssistant Conversations
---
# Chat Fine-tuning Dataset - OpenAssistant Falcon
This dataset allows for fine-tuning chat models, using '\nHuman:' and '\nAssistant:' to wrap user messages.

It still uses `<|endoftext|>` as the EOS and BOS token, as per Falcon.

Sample:

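Below is an illustrative example of the target format (placeholder content, not an actual row from the dataset); the '\n' in the markers is a literal newline:

```
Human: What is the capital of France?
Assistant: The capital of France is Paris.<|endoftext|>
Human: And roughly how many people live there?
Assistant: Paris proper has a population of roughly two million people.<|endoftext|>
```
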
Preparation:

1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which is itself a subset of the Open Assistant dataset available [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset contains only the highest-rated paths in each conversation tree, for a total of 9,846 samples.
2. The dataset was then filtered (a sketch of this step follows the list) to:
   - replace instances of '### Human:' with '\nHuman:'
   - replace instances of '### Assistant:' with '\nAssistant:'
   - end assistant responses with `<|endoftext|>` (to encourage the model to emit `<|endoftext|>` when it has finished a response).

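A minimal sketch of that filtering step, assuming the source rows carry the whole conversation in a single `text` field (as in the guanaco subset); this is illustrative, not the exact script used for this repo:

```python
from datasets import load_dataset

# Source subset: TimDettmers' guanaco split of OASST1 (highest-rated conversation paths)
ds = load_dataset("timdettmers/openassistant-guanaco")

def reformat(example):
    text = example["text"]
    # Swap the '### Human:' / '### Assistant:' markers for '\nHuman:' / '\nAssistant:'
    text = text.replace("### Human:", "\nHuman:")
    text = text.replace("### Assistant:", "\nAssistant:")
    # Close every assistant response with Falcon's EOS token: each assistant turn runs
    # until the next '\nHuman:' marker (or the end of the sample), so split on the marker,
    # append the token to chunks that contain an assistant reply, and re-join.
    chunks = text.split("\nHuman:")
    chunks = [c + "<|endoftext|>" if "Assistant:" in c else c for c in chunks]
    return {"text": "\nHuman:".join(chunks)}

ds = ds.map(reformat)
```
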
Details of the root dataset follow, copied from that repo:

# OpenAssistant Conversations Dataset (OASST1)

## Dataset Description

- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327

### Dataset Summary

In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.

Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.

### Dataset Structure

This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.

All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".

This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12, 2023.

### JSON Example: Message

For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.

```json
{
  "message_id": "218440fd-5317-4355-91dc-d001416df62b",
  "parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4",
  "user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4",
  "text": "It was the winter of 2035, and artificial intelligence (..)",
  "role": "assistant",
  "lang": "en",
  "review_count": 3,
  "review_result": true,
  "deleted": false,
  "rank": 0,
  "synthetic": true,
  "model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)",
  "labels": {
    "spam": { "value": 0.0, "count": 3 },
    "lang_mismatch": { "value": 0.0, "count": 3 },
    "pii": { "value": 0.0, "count": 3 },
    "not_appropriate": { "value": 0.0, "count": 3 },
    "hate_speech": { "value": 0.0, "count": 3 },
    "sexual_content": { "value": 0.0, "count": 3 },
    "quality": { "value": 0.416, "count": 3 },
    "toxicity": { "value": 0.16, "count": 3 },
    "humor": { "value": 0.0, "count": 3 },
    "creativity": { "value": 0.33, "count": 3 },
    "violence": { "value": 0.16, "count": 3 }
  }
}
```

### JSON Example: Conversation Tree

For readability, only a subset of the message properties is shown here.

```json
{
  "message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
  "tree_state": "ready_for_export",
  "prompt": {
    "message_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
    "text": "Why can't we divide by 0? (..)",
    "role": "prompter",
    "lang": "en",
    "replies": [
      {
        "message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8",
        "text": "The reason we cannot divide by zero is because (..)",
        "role": "assistant",
        "lang": "en",
        "replies": [
          // ...
        ]
      },
      {
        "message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d",
        "text": "The reason that the result of a division by zero is (..)",
        "role": "assistant",
        "lang": "en",
        "replies": [
          {
            "message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa",
            "text": "Math is confusing. Like those weird Irrational (..)",
            "role": "prompter",
            "lang": "en",
            "replies": [
              {
                "message_id": "f46207ca-3149-46e9-a466-9163d4ce499c",
                "text": "Irrational numbers are simply numbers (..)",
                "role": "assistant",
                "lang": "en",
                "replies": []
              },
              // ...
            ]
          }
        ]
      }
    ]
  }
}
```

Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.

If you would like to explore the dataset yourself, you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
GitHub repository.

## Main Dataset Files

Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
191
+
192
+ ### Ready For Export Trees
193
+
194
+ ```
195
+ 2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
196
+ 2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
197
+ ```
198
+ Trees in `ready_for_export` state without spam and deleted messages including message labels.
199
+ The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
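
As a reference for working with these exports directly (outside of Huggingface Datasets), here is a minimal sketch of reading the gzipped JSONL trees file; it assumes the file has been downloaded locally and that each line is one JSON object shaped like the conversation-tree example above:

```python
import gzip
import json

# Each line of a *.trees.jsonl.gz export is a single JSON object holding a full message tree
trees = []
with gzip.open("2023-04-12_oasst_ready.trees.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        trees.append(json.loads(line))

print(len(trees))                       # expected: 10,364 trees
print(trees[0]["prompt"]["text"][:80])  # root prompt text of the first tree
```
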
### All Trees

```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```

All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.

### Supplemental Exports: Spam & Prompts

```
2023-04-12_oasst_spam.messages.jsonl.gz
```

These are messages that were deleted or have a negative review result (`"review_result": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.

```
2023-04-12_oasst_prompts.messages.jsonl.gz
```

These are all retained initial prompt messages with a positive review result (no spam) from trees in `ready_for_export` or `prompt_lottery_waiting` state.

### Using the Huggingface Datasets

While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages that can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in Parquet format as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).

To load the oasst1 train & validation splits use:

```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst1")
train = ds['train']      # len(train)=84437 (95%)
val = ds['validation']   # len(val)=4401 (5%)
```

The messages appear in depth-first order of the message trees.

Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in the flat messages files) can be used to find all messages of a message tree or to select trees by their state.
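
A minimal sketch of that reconstruction (building a parent-to-children index over the flat messages and recursing from the root prompts; the helper name is illustrative):

```python
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst1")
messages = ds["train"]

# Index replies by parent_id; root prompt messages have parent_id == None
children = defaultdict(list)
roots = {}
for m in messages:
    if m["parent_id"] is None:
        roots[m["message_tree_id"]] = m
    else:
        children[m["parent_id"]].append(m)

def build_tree(message):
    """Recursively attach replies to a message using the parent-child index."""
    return {
        "text": message["text"],
        "role": message["role"],
        "replies": [build_tree(child) for child in children[message["message_id"]]],
    }

tree_id = next(iter(roots))        # any message_tree_id
tree = build_tree(roots[tree_id])  # nested dict mirroring the conversation tree
```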

### Languages

OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:

**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018

<details>
  <summary><b>Languages with under 1000 messages</b></summary>
  <ul>
    <li>Vietnamese: 952</li>
    <li>Basque: 947</li>
    <li>Polish: 886</li>
    <li>Hungarian: 811</li>
    <li>Arabic: 666</li>
    <li>Dutch: 628</li>
    <li>Swedish: 512</li>
    <li>Turkish: 454</li>
    <li>Finnish: 386</li>
    <li>Czech: 372</li>
    <li>Danish: 358</li>
    <li>Galician: 339</li>
    <li>Hebrew: 255</li>
    <li>Romanian: 200</li>
    <li>Norwegian Bokmål: 133</li>
    <li>Indonesian: 115</li>
    <li>Bulgarian: 95</li>
    <li>Bengali: 82</li>
    <li>Persian: 72</li>
    <li>Greek: 66</li>
    <li>Esperanto: 59</li>
    <li>Slovak: 19</li>
  </ul>
</details>

## Contact

- Discord: [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai)