salavina committed
Commit f3fb7ba
1 Parent(s): b73df3c

Add OCW dataset

Files changed (5)
  1. OCW.json +0 -0
  2. README.md +321 -3
  3. test.json +0 -0
  4. train.json +0 -0
  5. validation.json +0 -0
OCW.json ADDED
 
README.md CHANGED
@@ -1,3 +1,321 @@
- ---
- license: mit
- ---

# 🧩 Only Connect Wall (OCW) Dataset

The Only Connect Wall (OCW) dataset contains 618 _"Connecting Walls"_ from the [Round 3: Connecting Wall](https://en.wikipedia.org/wiki/Only_Connect#Round_3:_Connecting_Wall) segment of the [Only Connect quiz show](https://en.wikipedia.org/wiki/Only_Connect), collected from 15 seasons' worth of episodes. Each wall contains the ground-truth __groups__ and __connections__ as well as recorded human performance. Please see [our paper](https://arxiv.org/abs/2306.11167) for more details about the dataset and its motivations.

## 📋 Table of Contents

- [🧩 Only Connect Wall (OCW) Dataset](#-only-connect-wall-ocw-dataset)
  - [📋 Table of Contents](#-table-of-contents)
  - [📖 Usage](#-usage)
    - [Downloading the dataset](#downloading-the-dataset)
    - [Dataset structure](#dataset-structure)
    - [Loading the dataset](#loading-the-dataset)
    - [Evaluating](#evaluating)
    - [Downloading easy datasets for ablation studies](#downloading-easy-datasets-for-ablation-studies)
    - [Running the baselines](#running-the-baselines)
      - [Word Embeddings and Pre-trained Language Models](#word-embeddings-and-pre-trained-language-models)
      - [Large Language Models](#large-language-models)
  - [✍️ Contributing](#️-contributing)
  - [📝 Citing](#-citing)
  - [🙏 Acknowledgements](#-acknowledgements)

## 📖 Usage

### Downloading the dataset

The dataset can be downloaded from [here](https://www.cs.toronto.edu/~taati/OCW/OCW.tar.gz) or with a bash script:

```bash
bash download_OCW.sh
```

### Dataset structure

The dataset is provided as JSON files, one for each partition: `train.json`, `validation.json`, and `test.json`. We also provide an `OCW.json` file that contains all examples across all splits. The splits are sized as follows:

| Split | # Walls |
|:-------|:---------:|
| `train` | 62 |
| `validation` | 62 |
| `test` | 494 |

Here is an example of the dataset's structure:

```json
{
  "season_to_walls_map": {
    "1": {
      "num_walls": 30,
      "start_date": "15/09/2008",
      "end_date": "22/12/2008"
    }
  },
  "dataset": [
    {
      "wall_id": "882c",
      "season": 1,
      "episode": 5,
      "words": [
        "Puzzle",
        "Manhattan",
        "B",
        "Wrench",
        "Smith",
        "Nuts",
        "Brooks",
        "Blanc",
        "Suit",
        "Screwdriver",
        "Sidecar",
        "Margarita",
        "Hammer",
        "Business",
        "Gimlet",
        "Gibson"
      ],
      "gt_connections": [
        "Famous Mels",
        "Household tools",
        "Cocktails",
        "Monkey ___"
      ],
      "groups": {
        "group_1": {
          "group_id": "882c_01",
          "gt_words": [
            "Blanc",
            "Brooks",
            "B",
            "Smith"
          ],
          "gt_connection": "Famous Mels",
          "human_performance": {
            "grouping": 1,
            "connection": 1
          }
        },
        "group_2": {
          "group_id": "882c_02",
          "gt_words": [
            "Screwdriver",
            "Hammer",
            "Gimlet",
            "Wrench"
          ],
          "gt_connection": "Household tools",
          "human_performance": {
            "grouping": 1,
            "connection": 1
          }
        },
        "group_3": {
          "group_id": "882c_03",
          "gt_words": [
            "Sidecar",
            "Manhattan",
            "Gibson",
            "Margarita"
          ],
          "gt_connection": "Cocktails",
          "human_performance": {
            "grouping": 1,
            "connection": 1
          }
        },
        "group_4": {
          "group_id": "882c_04",
          "gt_words": [
            "Puzzle",
            "Business",
            "Nuts",
            "Suit"
          ],
          "gt_connection": "Monkey ___",
          "human_performance": {
            "grouping": 1,
            "connection": 1
          }
        }
      },
      "overall_human_performance": {
        "grouping": [1, 1, 1, 1],
        "connections": [1, 1, 1, 1]
      }
    }
  ]
}
```

where

- `"season_to_walls_map"` contains the `"num_walls"` in each season, as well as the `"start_date"` and `"end_date"` on which the season ran
- `"dataset"` is a list of dictionaries, where each dictionary contains all accompanying information about a wall:
  - `"wall_id"`: a unique string identifier for the wall
  - `"season"`: an integer representing the season the wall was collected from
  - `"episode"`: an integer representing the episode the wall was collected from
  - `"words"`: a list of strings representing the words in the wall in random order
  - `"gt_connections"`: a list of strings representing the ground truth connections of each group
  - `"groups"`: a dictionary of dictionaries containing the four groups in the wall, each with the following items:
    - `"group_id"`: a unique string identifier for the group
    - `"gt_words"`: a list of strings representing the ground truth words in the group
    - `"gt_connection"`: a string representing the ground truth connection of the group
    - `"human_performance"`: a dictionary containing recorded human performance for the grouping and connections tasks
  - `"overall_human_performance"`: a dictionary containing recorded human performance for the grouping and connections tasks for each group in the wall

### Loading the dataset

The three partitions can be loaded the same way as any other JSON file. For example, using Python:

```python
import json

dataset = {
    "train": json.load(open("./dataset/train.json", "r"))["dataset"],
    "validation": json.load(open("./dataset/validation.json", "r"))["dataset"],
    "test": json.load(open("./dataset/test.json", "r"))["dataset"],
}
```

However, it is likely easiest to work with the dataset using the Hugging Face Datasets library:

```python
# pip install datasets
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={
        "train": "./dataset/train.json",
        "validation": "./dataset/validation.json",
        "test": "./dataset/test.json",
    },
    field="dataset",
)
```

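As a quick sanity check, here is a minimal sketch (using the plain-`json` loading shown above; the field names follow the structure described earlier) that prints the groups of the first training wall:

```python
# Inspect the first wall of the training split
# (`dataset` comes from the plain-JSON loading example above)
wall = dataset["train"][0]
print(wall["wall_id"], wall["season"], wall["episode"])
print(wall["words"])  # the 16 clues, in random order

# Each wall has four ground-truth groups of four words, each with a connection
for group in wall["groups"].values():
    print(group["gt_words"], "->", group["gt_connection"])
```
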
### Evaluating

We provide a script for evaluating the performance of a model on the dataset. Before running, make sure you have installed the requirements and package:

```bash
pip install -r requirements.txt
pip install -e .
```

Then, ensure your model predictions are formatted as follows in a JSON file:

```json
[{
  "wall_id": "882c_01",
  "predicted_groups": [
    ["Puzzle", "Manhattan", "B", "Wrench"],
    ["Smith", "Nuts", "Brooks", "Blanc"],
    ["Suit", "Screwdriver", "Sidecar", "Margarita"],
    ["Hammer", "Business", "Gimlet", "Gibson"]
  ],
  "predicted_connections": ["Famous Mels", "Household tools", "Cocktails", "Monkey ___"]
}]
```

Note that only one of `"predicted_groups"` or `"predicted_connections"` is required; the other can be `null`. In the evaluation script, predicting groups is considered `"task1-grouping"` and predicting connections is considered `"task2-connections"`.
228
+ To run the evaluation script:
229
+
230
+ ```bash
231
+ python src/ocw/evaluate_only_connect.py \
232
+ --prediction-file "./predictions/task1.json" \
233
+ --dataset-path "./dataset/" \
234
+ --results-path "./results/" \
235
+ --task "task1-grouping"
236
+ ```
237
+
238
+ ### Downloading easy datasets for ablation studies
239
+
240
+ We also produced two "easy" versions of the dataset, designed to remove or dramatically reduce the number of red herrings, for abalation:
241
+
242
+ - A copy of the dataset where each wall in the test set is replaced with a _random_ selection of groups. No group is repeated twice, and no wall contains two copies of the same clue. The train and validation sets are unmodified. This dataset can be downloaded from [here](https://www.cs.toronto.edu/~taati/OCW/OCW_randomized.tar.gz) or with a bash script:
243
+
244
+ ```bash
245
+ bash download_OCW_randomized.sh
246
+ ```
247
+ - A copy of the dataset generated from WordNet by selecting equivalent synonyms for each clue in a group. This dataset can be downloaded from [here](https://www.cs.toronto.edu/~taati/OCW/OCW_wordnet.tar.gz) or with a bash script:
248
+
249
+ ```bash
250
+ bash download_OCW_wordnet.sh
251
+ ```
252
+
253
+ ### Running the baselines
254
+
255
+ #### Word Embeddings and Pre-trained Language Models
256
+
257
+ To run word embeddings and PLM baseline:
258
+
259
+ ```bash
260
+ python scripts/prediction.py \
261
+ --model-name "intfloat/e5-base-v2" \
262
+ --dataset-path "./dataset/" \
263
+ --predictions-path "./predictions/" \
264
+ --task "task1-grouping"
265
+ ```
266
+ The `model_name` should be from huggingface model hub or in `['elmo', 'glove', 'crawl', 'news']`.
267
+ To run contextualized embeddings in PLMs, use `--contextual` flag.
268
+
269
+ To plot the results:
270
+
271
+ ```bash
272
+ python scripts/plot.py \
273
+ --wall-id "8cde" \
274
+ --model-name "intfloat/e5-base-v2" \
275
+ --shuffle-seed 9
276
+ ```
277
+
278
+ #### Large Language Models
279
+
280
+ To run the few-shot in-context LLM baseline, see the [`run_openai.ipynb`](./notebooks/run_openai.ipynb) notebook. Note: this will require an OpenAI API key.
281
+
282
+ ## ✍️ Contributing
283
+
284
+ We welcome contributions to this repository (noticed a typo? a bug?). To propose a change:
285
+
286
+ ```
287
+ git clone https://github.com/salavina/OCW
288
+ cd OCW
289
+ git checkout -b my-branch
290
+ pip install -r requirements.txt
291
+ pip install -e .
292
+ ```
293
+
294
+ Once your changes are made, make sure to lint and format the code (addressing any warnings or errors):
295
+
296
+ ```
297
+ isort .
298
+ black .
299
+ flake8 .
300
+ ```
301
+
302
+ Then, submit your change as a pull request.
303
+
304
+ ## 📝 Citing
305
+
306
+ If you use the Only Connect dataset in your work, please consider citing our paper:
307
+
308
+ ```
309
+ @article{Naeini2023LargeLM,
310
+ title = {Large Language Models are Fixated by Red Herrings: Exploring Creative Problem Solving and Einstellung Effect using the Only Connect Wall Dataset},
311
+ author = {Saeid Alavi Naeini and Raeid Saqur and Mozhgan Saeidi and John Giorgi and Babak Taati},
312
+ year = 2023,
313
+ journal = {ArXiv},
314
+ volume = {abs/2306.11167},
315
+ url = {https://api.semanticscholar.org/CorpusID:259203717}
316
+ }
317
+ ```
318
+
319
+ ## 🙏 Acknowledgements
320
+
321
+ We would like the thank the maintainers and contributors of the fan-made and run website [https://ocdb.cc/](https://ocdb.cc/) for providing the data for this dataset. We would also like to thank the creators of the Only Connect quiz show for producing such an entertaining and thought-provoking show.
test.json ADDED
 
train.json ADDED
 
validation.json ADDED