vanyacohen committed
Commit 237abcc · verified · 1 Parent(s): befe4d5

Update README.md

Files changed (1): README.md +42 -16
README.md CHANGED
@@ -7,15 +7,15 @@ language:
 pretty_name: CaT-Bench
 ---
 
-# Dataset Card for CaT-BENCH
+# Dataset Card for CaT-Bench
 
-CaT-BENCH is a benchmark dataset designed to evaluate large language models' (LLMs) understanding of causal and temporal dependencies in natural language plans, specifically in cooking recipes. It consists of questions that test whether one step must necessarily occur before or after another, requiring reasoning about preconditions, effects, and the overall structure of the plan.
+CaT-Bench is a benchmark dataset designed to evaluate large language models' (LLMs) understanding of causal and temporal dependencies in natural language plans, specifically in cooking recipes. It consists of questions that test whether one step must necessarily occur before or after another, requiring reasoning about preconditions, effects, and the overall structure of the plan.
 
 ## Dataset Details
 
 ### Dataset Description
 
-CaT-BENCH (Causal and Temporal Benchmark) is aimed at assessing the ability of language models to reason about causal and temporal relationships within natural language plans. The dataset is constructed from cooking recipes and contains 4,260 questions about causal dependencies spanning 57 unique plans. Each question asks whether a particular step in a recipe must occur before or after another step, challenging models to understand the underlying causal and temporal structure.
+CaT-Bench (Causal and Temporal Benchmark) is aimed at assessing the ability of language models to reason about causal and temporal relationships within natural language plans. The dataset is constructed from cooking recipes and contains 4,260 questions about causal dependencies spanning 57 unique plans. Each question asks whether a particular step in a recipe must occur before or after another step, challenging models to understand the underlying causal and temporal structure.
 
 - **Curated by:** Yash Kumar Lal*, Vanya Cohen*, Nathanael Chambers, Niranjan Balasubramanian, Raymond Mooney (* equal contribution)
 - **Funded by:** DARPA KAIROS program under agreement number FA8750-19-2-1003, National Science Foundation under award IIS #2007290, DARPA's Perceptually-enabled Task Guidance (PTG) program under Contract No. HR001122C007
@@ -23,7 +23,7 @@ CaT-BENCH (Causal and Temporal Benchmark) is aimed at assessing the ability of l
 - **Language(s) (NLP):** English
 - **License:** Apache License 2.0
 
-### Dataset Sources [optional]
+### Dataset Sources
 
 - **Repository:** [More Information Needed]
 - **Paper:** [Link to the arXiv paper when available]
@@ -32,7 +32,7 @@ CaT-BENCH (Causal and Temporal Benchmark) is aimed at assessing the ability of l
 
 ### Direct Use
 
-CaT-BENCH is intended for evaluating and benchmarking the performance of language models on tasks requiring causal and temporal reasoning within natural language plans. Researchers and practitioners can use this dataset to assess how well models understand dependencies between steps in a plan, such as preconditions and effects, and improve model architectures or training methods accordingly.
+CaT-Bench is intended for evaluating and benchmarking the performance of language models on tasks requiring causal and temporal reasoning within natural language plans. Researchers and practitioners can use this dataset to assess how well models understand dependencies between steps in a plan, such as preconditions and effects, and improve model architectures or training methods accordingly.
 
 ## Dataset Structure
 
@@ -52,7 +52,7 @@ The dataset was created to address the need for evaluating language models' unde
 
 #### Data Collection and Processing
 
-The dataset is based on the Recipe Flow Graph Corpus by Yamakata et al. (2020), which contains 300 English cooking recipes annotated with substep procedure dependencies. From this corpus:
+The dataset is based on the English Recipe Flow Graph Corpus by Yamakata et al. (2020), which contains 300 English cooking recipes annotated with substep procedure dependencies. From this corpus:
 
 - **Selection:** 57 recipes were selected for inclusion.
 - **Question Generation:** For each ordered pair of steps, two binary questions were created regarding the necessity of one step occurring before or after another.
@@ -61,9 +61,7 @@ The dataset is based on the Recipe Flow Graph Corpus by Yamakata et al. (2020),
 
 #### Who are the source data producers?
 
-The source data originates from the Recipe Flow Graph Corpus created by Yamakata et al. (2020). The recipes were collected from publicly available sources, and the dependency annotations were provided by the original authors.
-
-### Annotations [optional]
+The source data originates from the English Recipe Flow Graph Corpus created by Yamakata et al. (2020). The recipes were collected from publicly available sources, and the dependency annotations were provided by the original authors.
 
 #### Annotation process
 
@@ -73,10 +71,6 @@ Annotations were derived from the existing dependency graphs in the Recipe Flow
 - **Guidelines:** Annotators were provided with standardized instructions and examples.
 - **Inter-Annotator Agreement:** Measured using weighted Fleiss Kappa, indicating high agreement among annotators.
 
-#### Who are the annotators?
-
-The initial dependency annotations were created by the authors of the Recipe Flow Graph Corpus. For human evaluation of model explanations, annotators were U.S.-based crowd workers from Amazon Mechanical Turk with high approval ratings.
-
 #### Personal and Sensitive Information
 
 The dataset does not contain any personal, sensitive, or private information. All data are based on publicly available cooking recipes and contain no personally identifiable information.
@@ -89,12 +83,14 @@ The dataset does not contain any personal, sensitive, or private information. Al
 
 ### Recommendations
 
-Users should be cautious when generalizing results from this dataset to other domains or languages. It's recommended to use CAT-BENCH in conjunction with other benchmarks to get a comprehensive evaluation of a model's reasoning capabilities. Researchers should also be aware of potential biases in language models and consider additional evaluations for fairness and robustness.
+Users should be cautious when generalizing results from this dataset to other domains or languages. It's recommended to use CaT-Bench in conjunction with other benchmarks to get a comprehensive evaluation of a model's reasoning capabilities. Researchers should also be aware of potential biases in language models and consider additional evaluations for fairness and robustness.
 
-<!-- ## Citation [optional]
+## Citation
 
 **BibTeX:**
 
+Please cite both this benchmark and the original English Recipe Flow Graph Corpus dataset.
+
 ```bibtex
 @misc{lal2024catbench,
   title={CaT-BENCH: Benchmarking Language Model Understanding of Causal and Temporal Dependencies in Plans},
@@ -104,4 +100,34 @@ Users should be cautious when generalizing results from this dataset to other do
   archivePrefix={arXiv},
   primaryClass={cs.CL}
 }
-``` -->
+
+@inproceedings{yamakata-etal-2020-english,
+  title = "{E}nglish Recipe Flow Graph Corpus",
+  author = "Yamakata, Yoko and
+    Mori, Shinsuke and
+    Carroll, John",
+  editor = "Calzolari, Nicoletta and
+    B{\'e}chet, Fr{\'e}d{\'e}ric and
+    Blache, Philippe and
+    Choukri, Khalid and
+    Cieri, Christopher and
+    Declerck, Thierry and
+    Goggi, Sara and
+    Isahara, Hitoshi and
+    Maegaard, Bente and
+    Mariani, Joseph and
+    Mazo, H{\'e}l{\`e}ne and
+    Moreno, Asuncion and
+    Odijk, Jan and
+    Piperidis, Stelios",
+  booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
+  month = may,
+  year = "2020",
+  address = "Marseille, France",
+  publisher = "European Language Resources Association",
+  url = "https://aclanthology.org/2020.lrec-1.638",
+  pages = "5187--5194",
+  language = "English",
+  ISBN = "979-10-95546-34-4",
+}
+```
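
The question-generation scheme the card describes (two binary before/after questions for each ordered pair of steps) can be sketched roughly as follows. This is a minimal illustration only: the function name and question templates are assumptions, not the dataset's actual wording.

```python
from itertools import permutations

def generate_questions(steps):
    """Emit two binary dependence questions per ordered pair of plan steps.

    For n steps this yields n * (n - 1) ordered pairs and twice as many
    questions. Templates are illustrative, not CaT-Bench's exact phrasing.
    """
    questions = []
    for i, j in permutations(range(len(steps)), 2):
        questions.append(f'Must "{steps[i]}" happen before "{steps[j]}"? (yes/no)')
        questions.append(f'Must "{steps[j]}" happen after "{steps[i]}"? (yes/no)')
    return questions

# A toy 3-step plan: 6 ordered pairs, hence 12 questions.
recipe = ["Boil the pasta", "Drain the pasta", "Add the sauce"]
qs = generate_questions(recipe)
print(len(qs))
```

Note that only pairs with a true dependence in the recipe's flow graph have "yes" as the gold answer; the sketch above enumerates the question set, not the labels.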