Daniel O'Connell committed
Commit 5bbd8e9
1 Parent(s): c3c2492

add loader script

Files changed (2)
  1. README.md +36 -2
  2. alignment-research-dataset.py +315 -0
README.md CHANGED
@@ -17,7 +17,7 @@ It is currently maintained and kept up-to-date by volunteers at StampyAI / AI Sa
  
  ## Sources
  
- The important thing here is that not all of the dataset entries contain all the same keys.
+ Note that not all of the dataset entries contain the same keys.
  
  They all have the keys: id, source, title, text, and url
  
@@ -55,6 +55,41 @@ Other keys are available depending on the source document.
  
  2. `alignment_text`: This is a label specific to the arXiv papers. We added papers to the dataset using Allen AI's SPECTER model and included all the papers that got a confidence score of over 75%. However, since we could not verify with certainty that those papers were about alignment, we've decided to create the `alignment_text` key with the value `"pos"` when we manually labeled it as an alignment text and `"unlabeled"` when we have not labeled it yet. Additionally, we've only included the `text` for the `"pos"` entries, not the `"unlabeled"` entries.
  
+ ## Usage
+ 
+ Execute the following code to download and parse the files:
+ ```
+ from datasets import load_dataset
+ data = load_dataset('StampyAI/alignment-research-dataset')
+ ```
+ 
+ To only get the data for a specific source, pass it in as the second argument, e.g.:
+ 
+ ```
+ from datasets import load_dataset
+ data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
+ ```
+ 
+ The various sources have different keys. The resulting data object will have all keys that make sense, with `None` as the value of keys that aren't in a given source. For example, assuming there are the following sources with the appropriate features:
+ 
+ ##### source1
+ + id
+ + name
+ + description
+ + author
+ 
+ ##### source2
+ + id
+ + name
+ + url
+ + text
+ 
+ Then the resulting data object will have 6 columns, i.e. `id`, `name`, `description`, `author`, `url` and `text`, where rows from `source1` will have `None` in the `url` and `text` columns, and the `source2` rows will have `None` in their `description` and `author` columns.
+ 
+ ## Limitations and bias
+ 
+ LessWrong posts contain a disproportionate amount of content about x-risk doom, so beware of training or fine-tuning generative LLMs on the dataset.
+ 
  ## Contributing
  
  Join us at [StampyAI](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr).
@@ -64,4 +99,3 @@ Join us at [StampyAI](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr).
  Please use the following citation when using our dataset:
  
  Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2022.4338861 (2022).
- 
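
As a quick illustration of the usage section above (a minimal sketch, assuming the dataset and loader are published at `StampyAI/alignment-research-dataset`; the column names come from the loader script below):

```
from datasets import load_dataset

# One source only: rows carry just that source's columns.
lesswrong = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')['train']

# Default 'all' config: columns from every source, padded with None.
everything = load_dataset('StampyAI/alignment-research-dataset')['train']
row = everything.filter(lambda r: r['source'] == 'lesswrong')[0]
print(row['score'])     # a lesswrong-specific key, populated
print(row['abstract'])  # an arxiv_papers key, None on lesswrong rows
```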
 
alignment-research-dataset.py ADDED
@@ -0,0 +1,315 @@
+ import json
+ from pathlib import Path
+ 
+ import datasets
+ from datasets import Value, Sequence
+ 
+ 
+ _CITATION = '''
+ @article{kirchner2022understanding,
+   title={Understanding AI Alignment Research: A Systematic Analysis},
+   author={Kirchner, Joshua H and Smith, Lauren and Thibodeau, Joseph and McDonnell, Kathleen and Reynolds, Lauren},
+   journal={arXiv preprint arXiv:2022.4338861},
+   year={2022}
+ }
+ '''
+ 
+ _DESCRIPTION = """A dataset of AI alignment research, collected from various sources."""
+ 
+ _HOMEPAGE = "https://github.com/StampyAI/alignment-research-dataset"
+ 
+ _LICENSE = ""
+ 
+ _VERSION_ = '0.0.0'
+ 
+ 
+ def iterate_file(filename):
+     """Yield a parsed JSON object for each line of the given jsonl file."""
+     with open(filename) as f:
+         for line in f:
+             try:
+                 yield json.loads(line)
+             except json.JSONDecodeError:
+                 print(f'Could not parse: {line}')
+ 
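
Each source file is plain JSON Lines: one object per line, always including at least the shared keys listed in `DEFAULT_FEATURES` below. A minimal sketch of reading one (the filename is illustrative):

```
# lesswrong.jsonl holds one JSON object per line, e.g.
# {"id": "...", "source": "lesswrong", "title": "...", "text": "...", "url": "...", ...}
first = next(iterate_file('lesswrong.jsonl'))
print(first['id'], first['title'])
```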
+ 
+ ## Feature extractor helpers
+ def get_type(value):
+     """Recursively get the huggingface type for the provided value."""
+     if value is None:
+         return None
+     if value and isinstance(value, (tuple, list)):
+         return Sequence(get_type(value[0]))
+     if value and isinstance(value, dict):
+         return {k: get_type(v) for k, v in value.items()}
+     if isinstance(value, str):
+         return Value('string')
+     # bool must be checked before int, since bool is a subclass of int
+     if isinstance(value, bool):
+         return Value('bool')
+     if isinstance(value, int):
+         return Value('int32')
+     if isinstance(value, float):
+         return Value('double')
+     return None
+ 
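
A sketch of the feature types `get_type` infers; the sample values are hypothetical, but shaped like the records in the sources below:

```
sample = {
    'title': 'An example post',            # -> Value('string')
    'highlight': 1,                        # -> Value('int32')
    'authors': ['Jane Doe'],               # -> Sequence(Value('string'))
    'bibliography_bib': [{'title': 'x'}],  # -> Sequence({'title': Value('string')})
}
inferred = {k: get_type(v) for k, v in sample.items()}
```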
+ 
+ def print_extra_features(files):
+     """Go through all the provided files and print the non-default features found in each one.
+ 
+     This can be done manually but would be a hassle.
+     It's assumed that the files contain a JSON object on each line.
+     """
+     ignored_keys = [
+         'comments',  # Comments are arbitrarily nested objects, which doesn't play nice with huggingface
+     ]
+ 
+     per_file = {}
+     for filename in sorted(files):
+         extra_types = {}
+         for item in iterate_file(filename):
+             for k, v in item.items():
+                 if (k not in extra_types or not extra_types[k]) and k not in ignored_keys and k not in DEFAULT_FEATURES:
+                     extra_types[k] = get_type(v)
+         per_file[filename] = extra_types
+ 
+     print('DATASOURCES = {')
+     for filename, features in per_file.items():
+         vals = ',\n'.join(f"        '{k}': {v}" for k, v in features.items())
+         # `filename` is a Path, so `.stem` is the datasource name; doubled braces are literals
+         print(f"    '{filename.stem}': {{\n{vals}\n    }},")
+     print('}')
+ 
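
The `DATASOURCES` table below appears to have been generated with this helper; a sketch of regenerating it, assuming the jsonl files sit in a local `data/` directory:

```
from pathlib import Path

# Prints a ready-to-paste DATASOURCES definition to stdout.
print_extra_features(Path('data').glob('*.jsonl'))
```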
+ 
+ # These keys are present in all files
+ DEFAULT_FEATURES = {
+     'id': Value('string'),
+     'source': Value('string'),
+     'title': Value('string'),
+     'text': Value('large_string'),
+     'url': Value('string'),
+     'date_published': Value(dtype='string'),
+ }
+ 
+ # Per datasource additional features
+ DATASOURCES = {
+     'agentmodels': {
+         'source_filetype': Value(dtype='string', id=None),
+         'converted_with': Value(dtype='string', id=None),
+         'book_title': Value(dtype='string', id=None),
+         'authors': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)
+     },
+     'aiimpacts.org': {
+         'paged_url': Value(dtype='string', id=None)
+     },
+     'aipulse.org': {
+         'paged_url': Value(dtype='string', id=None)
+     },
+     'aisafety.camp': {
+         'paged_url': Value(dtype='string', id=None)
+     },
+     'alignment_newsletter': {
+         'converted_with': Value(dtype='string', id=None),
+         'source_type': Value(dtype='string', id=None),
+         'venue': Value(dtype='string', id=None),
+         'newsletter_category': Value(dtype='string', id=None),
+         'highlight': Value(dtype='int32', id=None),
+         'newsletter_number': Value(dtype='string', id=None),
+         'summarizer': Value(dtype='string', id=None),
+         'opinion': Value(dtype='string', id=None),
+         'prerequisites': Value(dtype='string', id=None),
+         'read_more': Value(dtype='string', id=None),
+         'authors': Value(dtype='string', id=None)
+     },
+     'arbital': {
+         'source_filetype': Value(dtype='string', id=None),
+         'authors': Value(dtype='string', id=None),
+         'alias': Value(dtype='string', id=None)
+     },
+     'arxiv_papers': {
+         'authors': Value(dtype='string', id=None),
+         'source_type': Value(dtype='string', id=None),
+         'converted_with': Value(dtype='string', id=None),
+         'data_last_modified': Value(dtype='string', id=None),
+         'abstract': Value(dtype='string', id=None),
+         'author_comment': Value(dtype='string', id=None),
+         'journal_ref': Value(dtype='string', id=None),
+         'doi': Value(dtype='string', id=None),
+         'primary_category': Value(dtype='string', id=None),
+         'categories': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)
+     },
+     'audio_transcripts': {
+         'source_filetype': Value(dtype='string', id=None),
+         'converted_with': Value(dtype='string', id=None),
+         'authors': Value(dtype='string', id=None)
+     },
+     'carado.moe': {
+         'source_type': Value(dtype='string', id=None),
+         'authors': Value(dtype='string', id=None)
+     },
+     'cold.takes': {},
+     'deepmind.blog': {
+         'source_type': Value(dtype='string', id=None)
+     },
+     'distill': {
+         'source_type': Value(dtype='string', id=None),
+         'converted_with': Value(dtype='string', id=None),
+         'authors': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
+         'abstract': Value(dtype='string', id=None),
+         'journal_ref': Value(dtype='string', id=None),
+         'doi': Value(dtype='string', id=None),
+         'bibliography_bib': Sequence(feature={'title': Value(dtype='string', id=None)}, length=-1, id=None)
+     },
+     'eaforum': {
+         'authors': Value(dtype='string', id=None),
+         'score': Value(dtype='string', id=None),
+         'omega_karma': Value(dtype='string', id=None),
+         'votes': Value(dtype='string', id=None),
+         'tags': Value(dtype='string', id=None)
+     },
+     'gdocs': {
+         'source_filetype': Value(dtype='string', id=None),
+         'converted_with': Value(dtype='string', id=None),
+         'authors': Value(dtype='string', id=None),
+         'docx_name': Value(dtype='string', id=None)
+     },
+     'gdrive_ebooks': {
+         'source_filetype': Value(dtype='string', id=None),
+         'converted_with': Value(dtype='string', id=None),
+         'chapter_names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
+         'file_name': Value(dtype='string', id=None)
+     },
+     'generative.ink': {},
+     'gwern_blog': {
+         'authors': Value(dtype='string', id=None)
+     },
+     'intelligence.org': {
+         'paged_url': Value(dtype='string', id=None)
+     },
+     'jsteinhardt.wordpress.com': {
+         'paged_url': Value(dtype='string', id=None)
+     },
+     'lesswrong': {
+         'authors': Value(dtype='string', id=None),
+         'score': Value(dtype='string', id=None),
+         'omega_karma': Value(dtype='string', id=None),
+         'votes': Value(dtype='string', id=None),
+         'tags': Value(dtype='string', id=None)
+     },
+     'markdown.ebooks': {
+         'source_type': Value(dtype='string', id=None),
+         'authors': Value(dtype='string', id=None),
+         'filename': Value(dtype='string', id=None)
+     },
+     'nonarxiv_papers': {
+         'source_filetype': Value(dtype='string', id=None),
+         'abstract': Value(dtype='string', id=None),
+         'authors': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
+         'filename': Value(dtype='string', id=None)
+     },
+     'qualiacomputing.com': {
+         'paged_url': Value(dtype='string', id=None)
+     },
+     'reports': {
+         'source_filetype': Value(dtype='string', id=None),
+         'abstract': Value(dtype='string', id=None),
+         'authors': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
+         'filename': Value(dtype='string', id=None)
+     },
+     'stampy': {
+         'source_filetype': Value(dtype='string', id=None),
+         'authors': Value(dtype='string', id=None),
+         'question': Value(dtype='string', id=None),
+         'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
+         'entry': Value(dtype='string', id=None)
+     },
+     'vkrakovna.wordpress.com': {
+         'paged_url': Value(dtype='string', id=None)
+     },
+     'waitbutwhy': {
+         'source_type': Value(dtype='string', id=None),
+         'authors': Value(dtype='string', id=None)
+     },
+     'www.yudkowsky.net': {
+         'paged_url': Value(dtype='string', id=None)
+     },
+ }
+ 
+ 
+ def join_features(features, to_join):
+     """Recursively join the provided dicts.
+ 
+     `to_join` can either be a dict to be merged, or a list of dicts to merge.
+     """
+     if not to_join:
+         return datasets.Features(features)
+     if isinstance(to_join, dict):
+         return datasets.Features(dict(features, **to_join))
+     return join_features(dict(features, **to_join[0]), to_join[1:])
+ 
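
A sketch of the two calling conventions, using the dicts defined above:

```
# Merge the shared keys with a single source's extras:
lw_features = join_features(DEFAULT_FEATURES, DATASOURCES['lesswrong'])

# Or fold in every source at once, as the 'all' config below does:
all_features = join_features(DEFAULT_FEATURES, list(DATASOURCES.values()))
```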
+ 
+ class AlignmentResearchDatasetConfig(datasets.BuilderConfig):
+     """BuilderConfig for AlignmentResearchDataset."""
+ 
+     def __init__(self, sources, features, **kwargs):
+         """BuilderConfig for AlignmentResearchDataset.
+ 
+         :param List[string] sources: the sources which will be used by this config
+         """
+         super().__init__(version=datasets.Version(_VERSION_), **kwargs)
+         self.sources = sources
+         self.features = join_features(DEFAULT_FEATURES, features)
+ 
+     @property
+     def files(self):
+         return [f'{source}.jsonl' for source in self.sources]
+ 
+ 
+ class AlignmentResearchDataset(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version(_VERSION_)
+ 
+     BUILDER_CONFIGS = [
+         AlignmentResearchDatasetConfig(
+             name='all',
+             description='All data files',
+             sources=list(DATASOURCES.keys()),
+             features=list(DATASOURCES.values())
+         )
+     ] + [
+         AlignmentResearchDatasetConfig(name=source, sources=[source], features=features)
+         for source, features in DATASOURCES.items()
+     ]
+     DEFAULT_CONFIG_NAME = 'all'
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=self.config.features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={'files': dl_manager.download(self.config.files)}
+             )
+         ]
+ 
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, files):
+         seen = set()
+ 
+         def is_good(item):
+             item_id = item and item.get('id')
+             if not item_id or item_id in seen:
+                 return False
+             seen.add(item_id)
+ 
+             return item.get('text') not in [None, '', 'n/a']
+ 
+         def prepare_example(item):
+             return item['id'], {k: item.get(k) for k in self.config.features}
+ 
+         lines = (item for filename in files for item in iterate_file(filename))
+         return map(prepare_example, filter(is_good, lines))
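
With this script at the root of the hub repo and the jsonl files uploaded alongside it, the configs defined above can be loaded by name; a minimal sketch (newer `datasets` versions may also require `trust_remote_code=True` for script-based datasets):

```
from datasets import load_dataset

# DEFAULT_CONFIG_NAME is 'all': every source merged into a single train split.
data = load_dataset('StampyAI/alignment-research-dataset')

# Each DATASOURCES key is also its own config:
arbital = load_dataset('StampyAI/alignment-research-dataset', 'arbital')
print(arbital['train'].features)
```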