Modalities: Text
Formats: json
Size: < 1K
ArXiv: 2412.01621
Libraries: Datasets, pandas
License: cc-by-4.0

Made the README more human-friendly, added example figures and a link to the original paper.

#3
Files changed (1)
  1. README.md +35 -7
README.md CHANGED
@@ -1,17 +1,45 @@
  ---
  license: cc-by-4.0
  ---
- # NYT-Connections
- This repository contains the `NYT-Connections` dataset proposed in the work *NYT-Connections: A Deceptively Simple Text Classification Task that Stumps System-1 Thinkers*. This work will be published in the 31st International Conference on Computational Linguistics in January of 2025.
+ # 🧩 NYT-Connections
+ This repository contains the `NYT-Connections` dataset proposed in the work [*NYT-Connections: A Deceptively Simple Text Classification Task that Stumps System-1 Thinkers*](https://arxiv.org/abs/2412.01621). This work was **published at the 31st International Conference on Computational Linguistics (COLING 2025)** and was honored with the *Best Dataset Paper* award.
 
  Authors: Angel Yahir Loredo Lopez, Tyler McDonald, Ali Emami
- ## Paper Abstract
+ ## 📜 Paper Abstract
  Large Language Models (LLMs) have shown impressive performance on various benchmarks, yet their ability to engage in deliberate reasoning remains questionable. We present NYT-Connections, a collection of 358 simple word classification puzzles derived from the New York Times Connections game. This benchmark is designed to penalize quick, intuitive "System 1" thinking, isolating fundamental reasoning skills. We evaluated six recent LLMs, a simple machine learning heuristic, and humans across three configurations: single-attempt, multiple attempts without hints, and multiple attempts with contextual hints. Our findings reveal a significant performance gap: even top-performing LLMs like GPT-4 fall short of human performance by nearly 30%. Notably, advanced prompting techniques such as Chain-of-Thought and Self-Consistency show diminishing returns as task difficulty increases. NYT-Connections uniquely combines linguistic isolation, resistance to intuitive shortcuts, and regular updates to mitigate data leakage, offering a novel tool for assessing LLM reasoning capabilities.
 
- ## Puzzle Description
- *NYT-Connections* puzzles are a subset of the New York Times' daily *Connections* contests. Each puzzle consists of 16 words, with the goal being to sort these words into 4 correct groupings of varying difficulty. The base game allows for hints when a solution is one word away from being a correct group, and allows up to 4 mistakes. Thus, the goal is to correctly identify all 4 groups without committing 4 mistakes.
+ ## 🎯 Puzzle Description
+ NYT-Connections puzzles are based on the **New York Times' daily Connections game**.
+
+ Each puzzle consists of 16 words, and the goal is to **group them into 4 correct categories**.
+
+ 💡 **How does it work?**
+ ✅ You can receive hints when only **one word is misplaced** in a group.
+ ❌ You can make **up to 3 mistakes** before failing.
+ 🏆 The objective is to correctly classify all 4 groups.
+
+ ### 🧩 **Example**
+ Let’s take a look at an example puzzle. Below, you’ll see **16 words** that need to be grouped:
+
+ <p align="center">
+   <img src="https://images2.imgbox.com/39/cd/Rb0FypjS_o.png" alt="Three Setups Explanation" width="500">
+ </p>
+
+ Each color represents a different correct group, but the relationships between words are not always obvious. This is where **System 1 vs. System 2 thinking** comes into play: solvers must go beyond intuition and **apply logical reasoning**.
+
+ A completed match typically follows this structure:
+
+ <p align="center">
+   <img src="https://images2.imgbox.com/67/87/oQ3sLnVs_o.png" alt="Ground Truth Example" width="700">
+ </p>
+
+ The three configurations (**full-hints, no-hints, one-try**) in our paper differ in how much of the original game's mechanics we retain. The **full-hints** configuration is the closest to the official *New York Times* version.
+
+ ---
+
- ## Data Description
+ ## 📂 Data Description
  `date` - the original date the contest was offered.
 
  `contest` - the title string for the contest.
@@ -25,7 +53,7 @@
 
  `difficulty` - the difficulty of the puzzle as rated by community contributors (should such a rating be obtained, otherwise `null`).
 
- ## Citation
+ ## 📖 Citation
  ```
  @misc{lopez2024nytconnectionsdeceptivelysimpletext,
  title={NYT-Connections: A Deceptively Simple Text Classification Task that Stumps System-1 Thinkers},
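
As a quick sanity check of the field layout the new Data Description lists (`date`, `contest`, `difficulty`), records can be inspected with pandas, one of the libraries tagged on this dataset. This is a minimal sketch with made-up illustrative records, not real puzzles; a real workflow would load the repository's JSON file instead.

```python
import pandas as pd

# Illustrative records following the field names in the Data Description.
# The values below are invented for demonstration; real puzzles would be
# loaded from the dataset's JSON file (e.g. via pd.read_json or the
# `datasets` library).
records = [
    {"date": "2023-06-12", "contest": "Connections #1", "difficulty": None},
    {"date": "2023-06-13", "contest": "Connections #2", "difficulty": 2.5},
]

df = pd.DataFrame(records)

# Per the Data Description, `difficulty` is null when no community rating
# exists, so filter with isna() to find unrated puzzles.
unrated = df[df["difficulty"].isna()]
print(len(unrated))
```

Here the first sample record has no rating, so the filter keeps one row.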