parquet-converter committed
Commit 9033302
1 Parent(s): ae549a1

Update parquet files

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. README.md +0 -196
  2. askacademia/test.json +0 -0
  3. askacademia/validation.json +0 -0
  4. askanthropology/test.json +0 -0
  5. askanthropology/train.json +0 -0
  6. askanthropology/validation.json +0 -0
  7. askbaking/test.json +0 -0
  8. askbaking/validation.json +0 -0
  9. askcarguys/test.json +0 -0
  10. askcarguys/train.json +0 -0
  11. askcarguys/validation.json +0 -0
  12. askculinary/test.json +0 -0
  13. askculinary/validation.json +0 -0
  14. askdocs/test.json +0 -0
  15. askdocs/train.json +0 -3
  16. askdocs/validation.json +0 -0
  17. askengineers/test.json +0 -0
  18. askengineers/train.json +0 -3
  19. askengineers/validation.json +0 -0
  20. askhistorians/test.json +0 -0
  21. askhistorians/train.json +0 -3
  22. askhistorians/validation.json +0 -0
  23. askhr/test.json +0 -0
  24. askhr/train.json +0 -3
  25. askhr/validation.json +0 -0
  26. askphilosophy/test.json +0 -0
  27. askphilosophy/train.json +0 -3
  28. askphilosophy/validation.json +0 -0
  29. askphysics/test.json +0 -0
  30. askphysics/train.json +0 -3
  31. askphysics/validation.json +0 -0
  32. askscience/test.json +0 -0
  33. askscience/train.json +0 -3
  34. askscience/validation.json +0 -0
  35. asksciencefiction/test.json +0 -0
  36. asksciencefiction/train.json +0 -3
  37. asksciencefiction/validation.json +0 -0
  38. asksocialscience/test.json +0 -0
  39. asksocialscience/train.json +0 -0
  40. asksocialscience/validation.json +0 -0
  41. askvet/test.json +0 -0
  42. askvet/train.json +0 -0
  43. askvet/validation.json +0 -0
  44. changemyview/test.json +0 -0
  45. changemyview/train.json +0 -3
  46. changemyview/validation.json +0 -0
  47. explainlikeimfive/test.json +0 -0
  48. explainlikeimfive/train.json +0 -3
  49. explainlikeimfive/validation.json +0 -0
  50. legaladvice/test.json +0 -0
README.md DELETED
@@ -1,196 +0,0 @@
---
license: mit
task_categories:
- text-generation
tags:
- human-feedback
- rlhf
- preferences
- reddit
size_categories:
- 100K<n<1M
language:
- en
---
# 🚢 Stanford Human Preferences Dataset (SHP)

## Summary

SHP is a dataset of **385K aggregate human preferences** over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
It is primarily intended for training reward models for RLHF and automatic evaluation models for NLG.

Each example is a Reddit post and a pair of top-level comments on that post, where one comment is preferred by Reddit users (in aggregate).
SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is definitively preferred over B.
If A had been written before B, we could not conclude this, since its higher score could simply reflect the greater visibility of being posted first.

How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf)?

| Dataset | Input | Output | No. Domains | Data Format | Length |
| ------- | ----- | ------ | ----------- | ----------- | ------ |
| SHP | Reddit post and comments | Aggregate Human Preference Label + Scores | 18 (cooking, cars, etc.) | Question/Instruction + Response | input up to 10672 T5 tokens |
| Anthropic/HH-RLHF | Dialogue with LLM | Individual Human Preference Label | 2 (harmful, helpful) | Multi-turn Dialogue | input up to 1435 T5 tokens |


## Data Structure

There are 18 directories, one for each subreddit, and each directory contains a JSONL file for each of the training, validation, and test splits.
Here's how to load the data with Hugging Face's `datasets` library:

```python
from datasets import load_dataset

# Load all the data
dataset = load_dataset("stanfordnlp/shp")

# Load one of the subreddits
dataset = load_dataset("stanfordnlp/shp", data_dir="askculinary")
```

Here's an example from `askculinary/train.json`:
```json
{
    "post_id": "qt3nxl",
    "domain": "askculinary_train",
    "upvote_ratio": 0.98,
    "history": "What's the best way to disassemble raspberries? Like this, but down to the individual seeds: https:\/\/i.imgur.com\/Z0c6ZKE.jpg I've been pulling them apart with tweezers and it's really time consuming. I have about 10 pounds to get through this weekend.",
    "c_root_id_A": "hkh25sc",
    "c_root_id_B": "hkh25lp",
    "created_at_utc_A": 1636822112,
    "created_at_utc_B": 1636822110,
    "score_A": 340,
    "score_B": 166,
    "human_ref_A": "Pectinex, perhaps? It's an enzyme that breaks down cellulose. With citrus, you let it sit in a dilute solution of pectinex overnight to break down the connective tissues. You end up with perfect citrus supremes. If you let the raspberries sit for a shorter time, I wonder if it would separate the seeds the same way...? Here's an example: https:\/\/www.chefsteps.com\/activities\/perfect-citrus-supreme",
    "human_ref_B": "Raspberry juice will make a bright stain at first, but in a matter of weeks it will start to fade away to almost nothing. It is what is known in the natural dye world as a fugitive dye, it will fade even without washing or exposure to light. I hope she gets lots of nice photos of these stains on her dress, because soon that will be all she has left of them!",
    "labels": 1,
    "seconds_difference": 2.0,
    "score_ratio": 2.0481927711
}
```

where the fields are:
- `post_id`: the ID of the Reddit post (string)
- `domain`: the subreddit and split the example is drawn from, separated by an underscore (string)
- `upvote_ratio`: the upvote ratio of the Reddit post (float)
- `history`: the post title concatenated to the post body (string)
- `c_root_id_A`: the ID of comment A (string)
- `c_root_id_B`: the ID of comment B (string)
- `created_at_utc_A`: UTC timestamp of when comment A was created (integer)
- `created_at_utc_B`: UTC timestamp of when comment B was created (integer)
- `score_A`: score of comment A (integer)
- `score_B`: score of comment B (integer)
- `human_ref_A`: text of comment A (string)
- `human_ref_B`: text of comment B (string)
- `labels`: the preference label -- 1 if A is preferred to B, 0 if B is preferred to A; this was randomized so that the label distribution is roughly 50/50 (integer)
- `seconds_difference`: how many seconds after the less preferred comment the more preferred one was created (always >= 0) (integer)
- `score_ratio`: the ratio score_A:score_B (always >= 1) (float)

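As a quick sanity check, the relationships between these fields can be verified directly. The snippet below is an illustrative sketch (not part of the dataset tooling) that checks one record; it interprets `score_ratio` as the higher score over the lower one, consistent with the >= 1 constraint:

```python
def check_record(rec):
    """Check that labels, timestamps, and score_ratio agree with the raw scores."""
    pref, other = ("A", "B") if rec["labels"] == 1 else ("B", "A")
    score_ok = rec[f"score_{pref}"] > rec[f"score_{other}"]
    # the preferred comment was created no earlier than the dispreferred one
    time_ok = rec[f"created_at_utc_{pref}"] >= rec[f"created_at_utc_{other}"]
    hi = max(rec["score_A"], rec["score_B"])
    lo = min(rec["score_A"], rec["score_B"])
    ratio_ok = abs(rec["score_ratio"] - hi / lo) < 1e-6
    return score_ok and time_ok and ratio_ok

# the askculinary example above
record = {"labels": 1, "score_A": 340, "score_B": 166,
          "created_at_utc_A": 1636822112, "created_at_utc_B": 1636822110,
          "score_ratio": 2.0481927711}
print(check_record(record))  # True
```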

## Dataset Design

The data is sourced from Reddit, a public forum organized into topic-specific communities called *subreddits*.
For example, the `askculinary` subreddit is where users ask cooking-related questions that are answered by other users.
The score of a post or comment is the number of upvotes it gets from users, minus the number of downvotes.
Scores are relative: subreddits (and posts) with more traffic will have more high-scoring posts (and comments).
Within a post, comments posted earlier tend to have higher scores simply because they have had more exposure.


### Subreddit Selection

SHP contains a train, validation, and test split for comments scraped from 18 different subreddits. We chose subreddits based on:
1. whether they were well-known (subscriber count >= 50K)
2. whether posts were expected to pose a question or instruction that the top-level comments were meant to answer
3. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)

The train/validation/test splits were created by splitting the post IDs of each subreddit in 90%/5%/5% proportions, so that no post appears in multiple splits.
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%:

| subreddit | train | validation | test | total |
| ----------------- | -----: | ---------: | ---: | ----: |
| askacademia | 31450 | 2095 | 1708 | 35253 |
| askanthropology | 3910 | 203 | 268 | 4381 |
| askbaking | 44007 | 2096 | 1544 | 47647 |
| askcarguys | 3227 | 159 | 117 | 3503 |
| askculinary | 45710 | 2094 | 2563 | 50367 |
| askdocs | 6449 | 315 | 455 | 7219 |
| askengineers | 57096 | 3154 | 2638 | 62888 |
| askhistorians | 3264 | 113 | 164 | 3541 |
| askhr | 8295 | 641 | 395 | 9331 |
| askphilosophy | 10307 | 608 | 677 | 11592 |
| askphysics | 7364 | 409 | 587 | 8360 |
| askscience | 13316 | 899 | 977 | 15192 |
| asksciencefiction | 29382 | 1576 | 1987 | 32945 |
| asksocialscience | 2706 | 147 | 188 | 3041 |
| askvet | 3300 | 170 | 224 | 3694 |
| changemyview | 38173 | 1637 | 1836 | 41646 |
| explainlikeimfive | 19592 | 1014 | 1070 | 21676 |
| legaladvice | 21170 | 1106 | 1011 | 23287 |
| ALL | 348718 | 18436 | 18409 | 385563 |
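The split construction can be pictured with a small sketch. This is not the authors' actual code; it only illustrates assigning post IDs to 90/5/5 buckets so that every preference from a given post lands in the same split (the `assign_split` helper and its hashing scheme are assumptions):

```python
import hashlib

def assign_split(post_id: str) -> str:
    # stable hash of the post ID -> bucket in [0, 100)
    bucket = int(hashlib.sha256(post_id.encode()).hexdigest(), 16) % 100
    if bucket < 90:
        return "train"
    if bucket < 95:
        return "validation"
    return "test"

# all preferences from the same post get the same split, by construction
print(assign_split("qt3nxl") == assign_split("qt3nxl"))  # True
```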

### Post and Comment Selection

Given a post P and two comments (A, B), we only included the preference A > B in the dataset if
1. A was written *no earlier than* B and A has a higher score than B.
2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
4. The post has a score >= 10 and each comment has a score >= 2 (upvoted at least once).

A post with `n` comments could yield up to (`n` choose `2`) preferences in the data.
Since the number of comments per post is Pareto-distributed, to prevent a relatively small number of posts from dominating the data, we limited the scraping to 50 comments per post.
This means each post could contribute up to (`50` choose `2`) preferences, though the actual number is much smaller in practice, since all the criteria above need to be met.

Reddit makes it very difficult to get anything beyond the top 1000 posts for each subreddit.
We started with the top-scoring 1000 posts (of all time) and used Reddit's search function to find the 25 most similar posts to each one, yielding up to 7500 unique post IDs per subreddit.


### Preprocessing

We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded ("CMV" to "Change my view that").
In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out in the text, it was kept).


## Building a Preference Model

### Finetuning

If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:

1. **Use a sufficiently large model.** With FLAN-T5-xl, you can get 65-85% accuracy depending on the subreddit.
2. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you finetune on `askculinary` preferences and test on `askcarguys` preferences).
3. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
   Although models like FLAN-T5 use relative positional embeddings, we found that the loss would not converge if we finetuned on inputs over 512 tokens.
   To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comments, however).
   If the input is still over 512 tokens, simply skip the example.
4. **Train for fewer epochs.** The [InstructGPT paper](https://arxiv.org/abs/2203.02155) suggests training a reward model for only 1 epoch.
   Since the same comment appears in multiple preferences, it is easy to overfit to the data.
5. **Training on less data may help.**
   Preferences with a large `score_ratio` (e.g., comment A having 2x the score of comment B) provide a stronger signal for finetuning, so you may only want to consider preferences above a certain `score_ratio`.
   The number of preferences per post is Pareto-distributed, so to prevent the model from overfitting to certain posts, you may want to limit the number of preferences taken from any one post.
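Tip 5 can be sketched in a few lines, assuming records shaped like the askculinary example above (the `filter_preferences` helper is hypothetical, not part of the dataset tooling):

```python
from collections import defaultdict

def filter_preferences(records, min_ratio=2.0, max_per_post=5):
    """Keep only strong preferences (score_ratio >= min_ratio) and cap
    the number of preferences drawn from any single post."""
    kept, per_post = [], defaultdict(int)
    for rec in records:
        if rec["score_ratio"] < min_ratio:
            continue
        if per_post[rec["post_id"]] >= max_per_post:
            continue
        per_post[rec["post_id"]] += 1
        kept.append(rec)
    return kept

# one weak preference plus seven from the same post -> 5 survive the cap
records = [{"post_id": "p1", "score_ratio": 1.2}] + \
          [{"post_id": "p2", "score_ratio": 3.0}] * 7
print(len(filter_preferences(records)))  # 5
```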


### Evaluating

Since it is easier to predict strongly-held preferences than weakly-held ones, instead of reporting a single accuracy value, we recommend reporting a performance curve as a function of the `score_ratio`.
For example, here is the accuracy curve for a FLAN-T5-xl model trained on the `askculinary` data using the suggestions above.
The orange line is from finetuning only on preferences with a 2+ score ratio and using no more than 5 preferences from each post to prevent overfitting:

![Graph](curve.png)

We see that finetuning on less -- but higher-quality -- data leads to higher accuracy on test data with a score ratio below 3.5, with no real downsides!
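The recommended evaluation can be sketched as accuracy within bins of `score_ratio` rather than a single number (the `accuracy_by_ratio_bin` helper and its bin edges are assumptions for illustration):

```python
def accuracy_by_ratio_bin(examples, preds, edges=(1.0, 1.5, 2.0, 3.0, 1e9)):
    """examples: records with 'labels' and 'score_ratio'; preds: 0/1 guesses."""
    bins = {}
    for lo, hi in zip(edges, edges[1:]):
        idx = [i for i, ex in enumerate(examples) if lo <= ex["score_ratio"] < hi]
        if idx:
            correct = sum(preds[i] == examples[i]["labels"] for i in idx)
            bins[(lo, hi)] = correct / len(idx)
    return bins

examples = [{"labels": 1, "score_ratio": 1.2},
            {"labels": 0, "score_ratio": 2.5},
            {"labels": 1, "score_ratio": 2.6}]
preds = [1, 0, 0]
print(accuracy_by_ratio_bin(examples, preds))  # {(1.0, 1.5): 1.0, (2.0, 3.0): 0.5}
```

Plotting these per-bin accuracies against the bin midpoints yields a curve like the one above.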


## Disclaimer

Although we filtered out posts with NSFW (over 18) content and chose an innocuous set of subreddits, some of the data may contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.

Reddit users on these subreddits are also not necessarily representative of the broader population, which one should keep in mind before using any models trained on this data.
As always, remember to evaluate!


## Contact

Please contact kawin@stanford.edu if you have any questions about the data.
This dataset was created by Kawin Ethayarajh, Heidi (Chenyu) Zhang, Yizhong Wang, and Dan Jurafsky.

askacademia/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askacademia/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askanthropology/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askanthropology/train.json DELETED
The diff for this file is too large to render. See raw diff
 
askanthropology/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askbaking/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askbaking/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askcarguys/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askcarguys/train.json DELETED
The diff for this file is too large to render. See raw diff
 
askcarguys/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askculinary/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askculinary/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askdocs/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askdocs/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6f017fb83987ad7feb0a30d8c497342d84f1a1b1af89953f8bba4a0bbf06e614
size 21221391

askdocs/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askengineers/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askengineers/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72ddaa55f6d28994b85126e4932f6025a842824b32bf581cf2c8a801246a2751
size 101518632

askengineers/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askhistorians/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askhistorians/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da06effc67f8d2b2cf52898482db703a4068ec9bf6fcdbf436ae8a9441603532
size 16486253

askhistorians/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askhr/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askhr/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:14eb837f88d27952302fe6afcc1e91e3c99178713c6ddb47488e7625abf1f506
size 19348171

askhr/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askphilosophy/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askphilosophy/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15caf42df29870f427b7332dee3a055191f1533933e51baf09313bbba6f8699f
size 23991278

askphilosophy/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askphysics/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askphysics/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b4dd17eebf693b1f761d63de7cfcb628b7cff054a08ce75e1d05ca09cf1c290
size 13017152

askphysics/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askscience/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askscience/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:73fb1932f101e62fd152618d60111ac7b362df36f1512d0962ea64dc42d564a6
size 31479455

askscience/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
asksciencefiction/test.json DELETED
The diff for this file is too large to render. See raw diff
 
asksciencefiction/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:26dd72037f4724759a32fb5ee81cc25349915e9a35fff2e5345e278bdff8dd7c
size 44022822

asksciencefiction/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
asksocialscience/test.json DELETED
The diff for this file is too large to render. See raw diff
 
asksocialscience/train.json DELETED
The diff for this file is too large to render. See raw diff
 
asksocialscience/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
askvet/test.json DELETED
The diff for this file is too large to render. See raw diff
 
askvet/train.json DELETED
The diff for this file is too large to render. See raw diff
 
askvet/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
changemyview/test.json DELETED
The diff for this file is too large to render. See raw diff
 
changemyview/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f586f213b000e7189f80ab10901128daaa9004e30bf3f1fef76ffd4b019931d
size 149304901

changemyview/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
explainlikeimfive/test.json DELETED
The diff for this file is too large to render. See raw diff
 
explainlikeimfive/train.json DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5d3337e6d17e94e8bbc5cd622b78c078eaf117035798b8fdb8be2c65d9da5122
size 38224358

explainlikeimfive/validation.json DELETED
The diff for this file is too large to render. See raw diff
 
legaladvice/test.json DELETED
The diff for this file is too large to render. See raw diff