---
license: apache-2.0
---

# Dataset Card for CAT-BENCH

CAT-BENCH is a benchmark dataset designed to evaluate large language models' (LLMs) understanding of causal and temporal dependencies in natural language plans, specifically in cooking recipes. It consists of questions that test whether one step must necessarily occur before or after another, requiring reasoning about preconditions, effects, and the overall structure of the plan.

## Dataset Details

### Dataset Description

CAT-BENCH (Causal and Temporal Benchmark) is aimed at assessing the ability of language models to reason about causal and temporal relationships within natural language plans. The dataset is constructed from cooking recipes and contains 4,260 questions about causal dependencies spanning 57 unique plans. Each question asks whether a particular step in a recipe must occur before or after another step, challenging models to understand the underlying causal and temporal structure.

- **Curated by:** Yash Kumar Lal*, Vanya Cohen*, Nathanael Chambers, Niranjan Balasubramanian, Raymond Mooney (* equal contribution)
- **Funded by [optional]:** DARPA KAIROS program under agreement number FA8750-19-2-1003, National Science Foundation under award IIS #2007290, DARPA's Perceptually-enabled Task Guidance (PTG) program under Contract No. HR001122C007
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** Apache License 2.0

### Dataset Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [Link to the arXiv paper when available]

## Uses

### Direct Use

CAT-BENCH is intended for evaluating and benchmarking the performance of language models on tasks requiring causal and temporal reasoning within natural language plans. Researchers and practitioners can use this dataset to assess how well models understand dependencies between steps in a plan, such as preconditions and effects, and improve model architectures or training methods accordingly.

### Out-of-Scope Use

The dataset is not designed for tasks unrelated to causal and temporal reasoning in plans, such as general text classification, sentiment analysis, or other NLP tasks. Misuse includes employing the dataset for purposes that do not align with its intended use or drawing conclusions beyond the scope of what the dataset can support.

## Dataset Structure

The dataset consists of:

- **Plans:** 57 unique cooking recipes.
- **Questions:** 4,260 binary (yes/no) questions about step dependencies.
- **Annotations:** Each question is labeled dependent (the two steps constrain each other's order) or non-dependent (the steps are independent).
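
For concreteness, a single question record might look like the following sketch. All field names and values here are hypothetical illustrations, not the dataset's actual JSON schema; consult the released files for the real format.

```python
# Hypothetical CAT-BENCH-style record. Field names and values are
# illustrative only; see the released JSON for the actual schema.
example = {
    "recipe_id": "recipe_0042",
    "steps": [
        "Preheat the oven to 180 C.",
        "Mix the flour and sugar.",
        "Bake the batter for 30 minutes.",
    ],
    "question": "Must step 1 happen before step 3?",
    "answer": "yes",    # binary yes/no label
    "dependent": True,  # the two steps lie on a directed dependency path
}

# Each question asks about an ordered pair of steps in one recipe.
print(example["question"], "->", example["answer"])
```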

## Dataset Creation

### Curation Rationale

The dataset was created to address the need for evaluating language models' understanding of causal and temporal dependencies in natural language plans. While LLMs have shown impressive generative capabilities, their ability to comprehend and reason about the structure and dependencies within plans is less understood. CAT-BENCH aims to fill this gap by providing a benchmark specifically focused on this aspect.

### Source Data

#### Data Collection and Processing

The dataset is based on the Recipe Flow Graph Corpus by Yamakata et al. (2020), which contains 300 English cooking recipes annotated with substep procedure dependencies. From this corpus:

- **Selection:** 57 recipes were selected for inclusion.
- **Question Generation:** For each ordered pair of steps, two binary questions were created regarding the necessity of one step occurring before or after the other.
- **Balancing:** The dataset was balanced to contain an equal number of dependent and non-dependent questions.
- **Annotations:** Step pairs were labeled as dependent based on whether a directed path exists between them in the recipe's dependency graph.
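
The labeling rule in the last bullet can be sketched as a reachability check over the recipe's dependency graph. The graph and step IDs below are a toy example, not a recipe from the corpus:

```python
from collections import deque

def depends(graph, a, b):
    """Return True if a directed path runs from step a to step b,
    i.e. step b transitively depends on step a."""
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt == b:
                return True
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def label(graph, a, b):
    """A (a, b) question pair is 'dependent' if either step must
    precede the other, i.e. a directed path exists in either direction."""
    return depends(graph, a, b) or depends(graph, b, a)

# Toy dependency graph: step 1 -> step 2 -> step 4, step 3 -> step 4.
toy = {1: [2], 2: [4], 3: [4]}
print(label(toy, 1, 4))  # True: dependent via the path 1 -> 2 -> 4
print(label(toy, 1, 3))  # False: no path in either direction
```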

#### Who are the source data producers?

The source data originates from the Recipe Flow Graph Corpus created by Yamakata et al. (2020). The recipes were collected from publicly available sources, and the dependency annotations were provided by the original authors.

### Annotations [optional]

#### Annotation process

Annotations were derived from the existing dependency graphs in the Recipe Flow Graph Corpus. Additional annotations include labeling question types and step distances. For human evaluation:

- **Crowdsourcing:** Crowd-sourced annotators were employed to rate model-generated explanations.
- **Guidelines:** Annotators were provided with standardized instructions and examples.
- **Inter-Annotator Agreement:** Measured using weighted Fleiss' kappa, indicating high agreement among annotators.
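
For reference, the unweighted form of Fleiss' kappa can be computed as below. The weighted variant additionally discounts partial disagreements between rating categories, which this minimal sketch omits:

```python
def fleiss_kappa(ratings):
    """Unweighted Fleiss' kappa.

    ratings[i][j] is the number of annotators who assigned category j
    to item i. Every item must be rated by the same number of annotators.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])

    # Mean per-item agreement P_bar.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Marginal category proportions p_j and chance agreement P_e.
    p_j = [sum(row[j] for row in ratings) / (n_items * n_raters)
           for j in range(n_cats)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Three annotators rating three items into two categories, all agreeing.
perfect = [[3, 0], [3, 0], [0, 3]]
print(fleiss_kappa(perfect))  # 1.0: perfect agreement
```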

#### Who are the annotators?

The initial dependency annotations were created by the authors of the Recipe Flow Graph Corpus. For human evaluation of model explanations, annotators were U.S.-based crowd workers from Amazon Mechanical Turk with high approval ratings.

#### Personal and Sensitive Information

The dataset does not contain any personal, sensitive, or private information. All data are based on publicly available cooking recipes and contain no personally identifiable information.

## Bias, Risks, and Limitations

- **Domain Limitation:** The dataset focuses solely on cooking recipes, which may limit the generalizability of findings to other domains involving plans and procedures.
- **Language Limitation:** All data are in English, so results may not reflect language models' performance in other languages.
- **Model Biases:** Language models evaluated on this dataset may exhibit biases or deficiencies it does not capture, such as over-reliance on temporal-order heuristics.

### Recommendations

Users should be cautious when generalizing results from this dataset to other domains or languages. It is recommended to use CAT-BENCH in conjunction with other benchmarks for a comprehensive evaluation of a model's reasoning capabilities. Researchers should also be aware of potential biases in language models and consider additional evaluations for fairness and robustness.

<!-- ## Citation [optional]

**BibTeX:**

```bibtex
@misc{lal2024catbench,
  title={CAT-BENCH: Benchmarking Language Model Understanding of Causal and Temporal Dependencies in Plans},
  author={Yash Kumar Lal* and Vanya Cohen* and Nathanael Chambers and Niranjan Balasubramanian and Raymond Mooney},
  year={2024},
  eprint={arXiv:xxxx.xxxxx},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
``` -->