parquet-converter committed on
Commit: d8d1079
1 Parent(s): b9ffbbe

Update parquet files

README.md DELETED
@@ -1,147 +0,0 @@
- ---
- annotations_creators:
- - found
- language:
- - en
- language_creators:
- - found
- license: []
- multilinguality:
- - monolingual
- pretty_name: CausalQA
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- tags:
- - question-answering
- - english
- - causal
- task_categories:
- - question-answering
- task_ids:
- - extractive-qa
- ---
-
- # Dataset Card for [Dataset Name]
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- [More Information Needed]
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
-
- Thanks to [@alamhanz](https://github.com/alamhanz) and [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
 
 
causalqa.py DELETED
@@ -1,119 +0,0 @@
- """Causal QA dataset loading script."""
- import os
- import sys
- import json
- import csv
- import yaml
- import urllib3
-
- import datasets
-
-
- class CausalqaConfig(datasets.BuilderConfig):
-     """BuilderConfig for causalqa."""
-
-     def __init__(
-         self,
-         data_features,
-         data_url,
-         citation,
-         **kwargs
-     ):
-         """BuilderConfig for causalqa.
-         Args:
-             data_features: `dict[string, string]`, map from the name of each
-                 feature to the dtype of its column in the CSV file
-             data_url: `dict[string, string]`, URLs to download the split files from
-             citation: `string`, citation for the dataset
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(CausalqaConfig, self).__init__(**kwargs)
-         self.data_features = data_features
-         self.data_url = data_url
-         self.citation = citation
-
-
- def OneBuild(data_info, feat_meta):
-     main_name = [*data_info][0]
-     submain_name = data_info[main_name].keys()
-     all_config = []
-     for k in submain_name:
-         fm_temp = feat_meta[main_name][k]
-         one_data_info = data_info[main_name][k]
-         cqa_config = CausalqaConfig(
-             name="{}.{}".format(main_name, k),
-             description=one_data_info["description"],
-             version=datasets.Version(one_data_info["version"], ""),
-             data_features=fm_temp,
-             data_url=one_data_info["url_data"],
-             citation=one_data_info["citation"]
-         )
-         all_config.append(cqa_config)
-     return all_config
-
-
- class CausalQA(datasets.GeneratorBasedBuilder):
-     """CausalQA: a causal question-answering dataset."""
-
-     http = urllib3.PoolManager()
-
-     _PATH_METADATA_RES = http.request('GET', 'https://huggingface.co/datasets/jakartaresearch/causalqa/raw/main/source/features_metadata.yaml')
-     _FILE_URL_RES = http.request('GET', 'https://huggingface.co/datasets/jakartaresearch/causalqa/raw/main/source/dataset_info.json')
-     _FILE_URL = json.loads(_FILE_URL_RES.data.decode("utf-8"))
-     _PATH_DESCRIPTION_RES = http.request('GET', 'https://huggingface.co/datasets/jakartaresearch/causalqa/raw/main/source/dataset_description.txt')
-     _CAUSALQA_DESCRIPTION = _PATH_DESCRIPTION_RES.data.decode("utf-8")
-
-     _HOMEPAGE = _FILE_URL['homepage']
-     all_files = _FILE_URL['files']
-
-     try:
-         fmeta = yaml.safe_load(_PATH_METADATA_RES.data)
-     except yaml.YAMLError as exc:
-         print(exc)
-
-     BUILDER_CONFIGS = []
-     for f in all_files:
-         BUILDER_CONFIGS += OneBuild(f, fmeta)
-
-     def _info(self):
-         self.features = {feat: datasets.Value(self.config.data_features[feat])
-                          for feat in self.config.data_features}
-
-         return datasets.DatasetInfo(
-             description=self._CAUSALQA_DESCRIPTION,
-             features=datasets.Features(self.features),
-             homepage=self._HOMEPAGE
-         )
-
-     def _split_generators(self, dl_manager):
-         data_train = dl_manager.download(self.config.data_url['train'])
-         data_val = dl_manager.download(self.config.data_url['val'])
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepath": data_train
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepath": data_val  # gen_kwargs keys become _generate_examples parameters
-                 },
-             )
-         ]
-
-     def _generate_examples(self, filepath):
-         """Generate examples."""
-         csv.field_size_limit(1000000000)
-         with open(filepath, encoding="utf-8") as csv_file:
-             csv_reader = csv.reader(csv_file, delimiter=",")
-             next(csv_reader)  # skip the header row
-
-             # the yielded fields depend on each file's feature schema
-             for id_, row in enumerate(csv_reader):
-                 existing_values = row
-                 feature_names = [*self.features]
-                 one_example_row = dict(zip(feature_names, existing_values))
-                 yield id_, one_example_row
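
With the loader script deleted and the splits converted to Parquet, the configs should load straight from the Hub through the `datasets` library. A minimal usage sketch, assuming the converted configs keep the `<source>.<split-scheme>` names that `OneBuild` constructed above:

```python
from datasets import load_dataset

# Assumption: the parquet conversion exposes the same config names the
# deleted script built, e.g. "triviaqa.random-split".
ds = load_dataset("jakartaresearch/causalqa", "triviaqa.random-split")
print(ds["train"][0])
```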
 
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"eli5.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "eli5.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1174541, "num_examples": 117929, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 130497, "num_examples": 13104, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=1-FKsZknoDE7bh0fKucs8nNu72IKFxfuo": {"num_bytes": 820757, "checksum": "92304a6f44fee943c29e67c0097dbe287e4a420c101fb865d4d8d6098299a8c2"}, "https://drive.google.com/uc?id=108bG1CJMaqANIqLvxthuQvsZro-5qwbX": {"num_bytes": 91188, "checksum": "37d29f228db3a35e871c2124fe0b854f5a399951cda0a877891e7180ee080884"}}, "download_size": 911945, "post_processing_size": null, "dataset_size": 1305038, "size_in_bytes": 2216983}, "eli5.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "eli5.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1174541, "num_examples": 117929, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 130497, "num_examples": 13104, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=11FfVASpa-Agie4AJfEPdjkR-QrvEO5Sd": {"num_bytes": 820757, "checksum": "92304a6f44fee943c29e67c0097dbe287e4a420c101fb865d4d8d6098299a8c2"}, "https://drive.google.com/uc?id=11XlFn79xGSGmPu-TU2SYariQgg2bby2h": {"num_bytes": 91188, "checksum": "37d29f228db3a35e871c2124fe0b854f5a399951cda0a877891e7180ee080884"}}, "download_size": 911945, "post_processing_size": null, "dataset_size": 1305038, "size_in_bytes": 2216983}, "gooaq.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "gooaq.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 95226834, "num_examples": 146253, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 20122, "num_examples": 33, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=10A8rPzMECn3s-qXSZRGKnlDMx5h6-U6k": {"num_bytes": 93129949, "checksum": "9cd916d9ab41e70df0f6feec2d5818dfe733209cfa4803e5cfb033ebbba0133c"}, "https://drive.google.com/uc?id=10PNaYwLxFBfjm2AMM65sp-kRs9rE-8_A": {"num_bytes": 19740, "checksum": "3c127cd582e52889b7a78dc7084333d646019054a39ca9d6849d2ea8fa156a6f"}}, "download_size": 93149689, "post_processing_size": null, "dataset_size": 95246956, "size_in_bytes": 188396645}, "gooaq.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "gooaq.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 85723336, "num_examples": 131657, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 9523620, "num_examples": 14629, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=10cZ6Kx7v_UHzHkvBgduoiJ7Ny3aeO-Ww": {"num_bytes": 83835750, "checksum": "1e72feb89d26e6fc2bac718115c4a8aca63f6f6278d75585952e61976ea9dd77"}, "https://drive.google.com/uc?id=11YdwCGTaw7jKk612tRc2T6NSYRUgmAjk": {"num_bytes": 9313939, "checksum": "0e48376bde7400059cc874aca3864e39909ad51a6dbb97ba6ff5e2d7b05ab331"}}, "download_size": 93149689, "post_processing_size": null, "dataset_size": 95246956, "size_in_bytes": 188396645}, "hotpotqa.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "hotpotqa.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4465930, "num_examples": 355, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 437034, "num_examples": 35, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=103bDQK53aT1wIFbJqe9B8Usi0_TCROBT": {"num_bytes": 4484332, "checksum": "0b954243ecbf904cc3ae0b7b85b922380f2febd6a802fb5ab04ac313198f1705"}, "https://drive.google.com/uc?id=1-ZSJNCjxdh5wEifkVVseieYYu45tVbEN": {"num_bytes": 438660, "checksum": "2f5151490204e71fcb63d17288c2907666125ca1f35601935d2e8a7101df100f"}}, "download_size": 4922992, "post_processing_size": null, "dataset_size": 4902964, "size_in_bytes": 9825956}, "hotpotqa.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "hotpotqa.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4393383, "num_examples": 351, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 509581, "num_examples": 39, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=11GYpcyT98XTomZhQxai5OLrTFoTGrfJJ": {"num_bytes": 4411273, "checksum": "6ba37d116a3bd64e88e63a613e7d74122ec749e8ee9195dc8c90ced03b1bf57c"}, "https://drive.google.com/uc?id=11B-cH_N8VIyLFCyM_l4dWfkOURXf4ky-": {"num_bytes": 511719, "checksum": "992d67501169f1624572d897eda080f4bb08df1321bba18e64f559473156a9e9"}}, "download_size": 4922992, "post_processing_size": null, "dataset_size": 4902964, "size_in_bytes": 9825956}, "msmarco.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "msmarco.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 169604379, "num_examples": 23011, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 17559831, "num_examples": 2558, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=1-jMNHG6rS9b6TnRZ6iNbjRLXRwr8Znb_": {"num_bytes": 169308630, "checksum": "6d08c5da8205fd0ea8313d0e9bcc032b88ee5c53ce9a96081659be57a5157d61"}, "https://drive.google.com/uc?id=1-BtYcEWwgaD0aI5hHCHFXCZOZu8I2e8o": {"num_bytes": 17527966, "checksum": "01cea9955ec48381b9933179b6174642f65be72148b286128a5c0bbe89e25005"}}, "download_size": 186836596, "post_processing_size": null, "dataset_size": 187164210, "size_in_bytes": 374000806}, "msmarco.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "msmarco.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 168417032, "num_examples": 23012, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 18827061, "num_examples": 2557, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=111RXRWuNk2CMhDbzY2gIudmcB70lsQSy": {"num_bytes": 168122339, "checksum": "6be361f3386c2e30ed59572b7a25bec3afafef65da9a19ded4c6efaa79f43f50"}, "https://drive.google.com/uc?id=11OLGeEkS6Wkv3q5ObNAinim5hqB5UU4D": {"num_bytes": 18794406, "checksum": "426c0716422d6298caeda64663ca29d0758e007d32b00849064003cdb07b40c2"}}, "download_size": 186916745, "post_processing_size": null, "dataset_size": 187244093, "size_in_bytes": 374160838}, "naturalquestions.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "naturalquestions.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 111644341, "num_examples": 1137, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 7074666, "num_examples": 71, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=1-M_G-V7p0XvGmw3JWtOSfeAlrGLMu8U3": {"num_bytes": 111709402, "checksum": "7f74571bec9b0f55c5c527d831e547e0f860e3c241a5b06e7a6de5148deecd03"}, "https://drive.google.com/uc?id=1-hjnE4TvEp76eznP14DHIsqYnitY0rNW": {"num_bytes": 7078462, "checksum": "90b0130b16bdd3e429cbc07b094270748327127f841538a98d8dda1ac83f6897"}}, "download_size": 118787864, "post_processing_size": null, "dataset_size": 118719007, "size_in_bytes": 237506871}, "naturalquestions.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "naturalquestions.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 107149045, "num_examples": 1087, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 11573438, "num_examples": 121, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=118ALG23_Ayi7qrAAdExJKLiN21Xy7VO5": {"num_bytes": 107204249, "checksum": "993fc8e988edcc5b85984578adc77c03e42699b1ce5244e19eb2918a480a5d5e"}, "https://drive.google.com/uc?id=11L8JW9llwDI-vg2LSXL4NmnHbOOHETBr": {"num_bytes": 11587103, "checksum": "bd113addf691d20b45f007b80950dec7f74b36dc860b5eb2b05449a623a16dc8"}}, "download_size": 118791352, "post_processing_size": null, "dataset_size": 118722483, "size_in_bytes": 237513835}, "newsqa.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "newsqa.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4671092, "num_examples": 623, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 253141, "num_examples": 29, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=1-hvqrR98PcajkorCvFvqz4fcuinZfgvr": {"num_bytes": 4680426, "checksum": "7a92c04b59de7fd565c6bcb9f70285638d9796a53fd7c7138df9016f22b78c6f"}, "https://drive.google.com/uc?id=1-oZbc9QFvuDfxzDhwotOYwvcLfFbCJQC": {"num_bytes": 253785, "checksum": "1ac723e5500a33b5803bc004217d97ea1b37c345666e9080d5a1926b4d2e2ef3"}}, "download_size": 4934211, "post_processing_size": null, "dataset_size": 4924233, "size_in_bytes": 9858444}, "newsqa.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "newsqa.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4422767, "num_examples": 586, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 513622, "num_examples": 66, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=11mraicFI6bb6KFcUl3unZxW0OSGg18UB": {"num_bytes": 4431727, "checksum": "dd3959b6f0d73c034d159bc6abd58bddd3eccd8262742c172662ff9f676725cb"}, "https://drive.google.com/uc?id=10rM5-BYr1mrSVSRFqgiFuQdYvKuCDzD1": {"num_bytes": 514773, "checksum": "5b99daa84e3a1cd33a27bae3b419961183286e981df2f96b150096388508a3ee"}}, "download_size": 4946500, "post_processing_size": null, "dataset_size": 4936389, "size_in_bytes": 9882889}, "paq.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "paq.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1009499834, "num_examples": 692645, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 112181249, "num_examples": 76961, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=10PrKt6kEgq07SNss5rZizRcqQxcxS84X": {"num_bytes": 1002956199, "checksum": "bc5aa81e12bb689b47442ba65ad8be41f99b8aed6a88a1cdf7addd11d3ec652a"}, "https://drive.google.com/uc?id=1-kuu0RihKcve-EGFtwjYz8jOdN6rXbcM": {"num_bytes": 111450221, "checksum": "3be6746245b3479a3a4e0f1144ce5bfb09e5dad1976e9996e9a22ba38cf11955"}}, "download_size": 1114406420, "post_processing_size": null, "dataset_size": 1121681083, "size_in_bytes": 2236087503}, "paq.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "paq.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1009499834, "num_examples": 692645, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 112181249, "num_examples": 76961, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=11W8G2mmQ78LBwet5GFvwvNu5hfPWOblN": {"num_bytes": 1002956199, "checksum": "bc5aa81e12bb689b47442ba65ad8be41f99b8aed6a88a1cdf7addd11d3ec652a"}, "https://drive.google.com/uc?id=10iItHXCfQ9wIsFmDUIMjdXbOSapRCAwj": {"num_bytes": 111450221, "checksum": "3be6746245b3479a3a4e0f1144ce5bfb09e5dad1976e9996e9a22ba38cf11955"}}, "download_size": 1114406420, "post_processing_size": null, "dataset_size": 1121681083, "size_in_bytes": 2236087503}, "searchqa.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "searchqa.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 31403453, "num_examples": 663, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 5578556, "num_examples": 117, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=1-AOYSVQf4GI7UnXcZ2EYbDXszHetjdrS": {"num_bytes": 31416040, "checksum": "f615faee688b858cf08a30b31032a64b8df7ff3cca042b0b3bbbefdbd35fb1de"}, "https://drive.google.com/uc?id=1-ZCCflByWZ3sBE_Hxirhfy9KQQ8d2ABN": {"num_bytes": 5581361, "checksum": "804944d505f0940060703b75b103321f8338cddb7bb0c782151cdede1d4896d8"}}, "download_size": 36997401, "post_processing_size": null, "dataset_size": 36982009, "size_in_bytes": 73979410}, "searchqa.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "searchqa.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 33071979, "num_examples": 702, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 3910030, "num_examples": 78, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=11ivqoK_aVjK6RpaT_QWjqJ3V9VMvw190": {"num_bytes": 33085689, "checksum": "d3a679a077bcc12c7e1f12f3504d4a5ec194d14ba20ec37c4db068ea536f6192"}, "https://drive.google.com/uc?id=11Uvh0s17N7hvwfPF75x0Ko6xdccfsZcl": {"num_bytes": 3911712, "checksum": "1cdc105f2d926210e70df5dadcf6457925cee057a9c06e13142cc3ef0d4b3203"}}, "download_size": 36997401, "post_processing_size": null, "dataset_size": 36982009, "size_in_bytes": 73979410}, "squad2.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "squad2.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5461054, "num_examples": 2957, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 549784, "num_examples": 252, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=10Akh_VNH5Kvp0BiKbq9irA-u5zgdoxPy": {"num_bytes": 5421114, "checksum": "8369ea80f63e550153051e63173ba0ecc5a9409e02e5b06839af483620191633"}, "https://drive.google.com/uc?id=10QszRRFigIz_bAWuhOkn3r3dngHLpDSy": {"num_bytes": 546425, "checksum": "ae2fa26d97f826c8496765c06767a71d5141c47860b2fc9f9b6df70cd288c807"}}, "download_size": 5967539, "post_processing_size": null, "dataset_size": 6010838, "size_in_bytes": 11978377}, "squad2.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "squad2.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5374088, "num_examples": 2888, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 589922, "num_examples": 321, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=110xR1sye2Qx-QrrOPb3IDLhkjLCqG1Zy": {"num_bytes": 5335047, "checksum": "f0f9202725d14c2ad867f0e7767112e8e7f059ece2de0b5fbeaae8dc5f9ff804"}, "https://drive.google.com/uc?id=1144-Zt5-b8nFZOgUXbnk7l-RLSHTnJmN": {"num_bytes": 585542, "checksum": "b4ed07a2def1a3ea6d482b52f5815701a5651f41cdc0a0306b08e2ec5bac58ad"}}, "download_size": 5920589, "post_processing_size": null, "dataset_size": 5964010, "size_in_bytes": 11884599}, "triviaqa.original-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. 
In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "triviaqa.original-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11278937, "num_examples": 637, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 1229922, "num_examples": 66, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=1-lb4JylW7olUzbLJBaBqNK-HAcxulxEI": {"num_bytes": 11279573, "checksum": "8ebce60165aabfee8b3faab4999da9e06c615d1d46e10c4f5368069b11ecbc02"}, "https://drive.google.com/uc?id=1-T0LHqgSvKyIx6YehQ-TrBOhJqygV-Si": {"num_bytes": 1230213, "checksum": "1fd4524f83484d90275a945ecbcacbcca089db0d79bb534df1232b7ac3d5f70e"}}, "download_size": 12509786, "post_processing_size": null, "dataset_size": 12508859, "size_in_bytes": 25018645}, "triviaqa.random-split": {"description": "Causal Question Answering Dataset is machine reading comprehension dataset from 10 QA datasets that are filtered using regex to get causal question. The dataset is from a paper titled CausalQA: A Benchmark for Causal Question Answering. 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Bl\u00fcbaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. In COLING.", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_processed": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_processed": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_processed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "causalqa", "config_name": "triviaqa.random-split", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11358699, "num_examples": 632, "dataset_name": "causalqa"}, "validation": {"name": "validation", "num_bytes": 1268034, "num_examples": 71, "dataset_name": "causalqa"}}, "download_checksums": {"https://drive.google.com/uc?id=11-y9PSfAAP8-L8PtBd_51RaA-5MjsvPH": {"num_bytes": 11359562, "checksum": "27067cb12e15f7177c83bf5eebe666890d50bd24c629101a3e89fba24576c023"}, "https://drive.google.com/uc?id=11WzPoeBLWbfMyR8xozfl-xMOSqevIIrs": {"num_bytes": 1268294, "checksum": "bae89fd42b101dca27b03dc354f4c34ed78fbccd7e2640f2e65d4e01fd0f16cd"}}, "download_size": 12627856, "post_processing_size": null, "dataset_size": 12626733, "size_in_bytes": 25254589}}
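
Each `download_checksums` entry above pairs a Google Drive URL with a SHA-256 digest and byte size. A short sketch of verifying one of those digests locally; `causalqa-train.csv` is a hypothetical path for a downloaded split file:

```python
import hashlib

# SHA-256 recorded above for the triviaqa.original-split train file.
expected = "8ebce60165aabfee8b3faab4999da9e06c615d1d46e10c4f5368069b11ecbc02"

digest = hashlib.sha256()
with open("causalqa-train.csv", "rb") as f:  # hypothetical local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        digest.update(chunk)
assert digest.hexdigest() == expected
```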
 
 
source/dataset_description.txt DELETED
@@ -1 +0,0 @@
- Causal Question Answering Dataset is a machine reading comprehension dataset built from 10 QA datasets that were filtered with regular expressions to keep causal questions. The dataset is from the paper "CausalQA: A Benchmark for Causal Question Answering". 2022. Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Blübaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast. In COLING.
 
 
source/dataset_info.json DELETED
@@ -1,225 +0,0 @@
- {
-   "homepage": "https://github.com/jakartaresearch",
-   "files": [
-     {
-       "eli5": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=1-FKsZknoDE7bh0fKucs8nNu72IKFxfuo",
-             "val": "https://drive.google.com/uc?id=108bG1CJMaqANIqLvxthuQvsZro-5qwbX"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=11FfVASpa-Agie4AJfEPdjkR-QrvEO5Sd",
-             "val": "https://drive.google.com/uc?id=11XlFn79xGSGmPu-TU2SYariQgg2bby2h"
-           }
-         }
-       }
-     },
-     {
-       "gooaq": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=10A8rPzMECn3s-qXSZRGKnlDMx5h6-U6k",
-             "val": "https://drive.google.com/uc?id=10PNaYwLxFBfjm2AMM65sp-kRs9rE-8_A"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=10cZ6Kx7v_UHzHkvBgduoiJ7Ny3aeO-Ww",
-             "val": "https://drive.google.com/uc?id=11YdwCGTaw7jKk612tRc2T6NSYRUgmAjk"
-           }
-         }
-       }
-     },
-     {
-       "hotpotqa": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=103bDQK53aT1wIFbJqe9B8Usi0_TCROBT",
-             "val": "https://drive.google.com/uc?id=1-ZSJNCjxdh5wEifkVVseieYYu45tVbEN"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=11GYpcyT98XTomZhQxai5OLrTFoTGrfJJ",
-             "val": "https://drive.google.com/uc?id=11B-cH_N8VIyLFCyM_l4dWfkOURXf4ky-"
-           }
-         }
-       }
-     },
-     {
-       "msmarco": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=1-jMNHG6rS9b6TnRZ6iNbjRLXRwr8Znb_",
-             "val": "https://drive.google.com/uc?id=1-BtYcEWwgaD0aI5hHCHFXCZOZu8I2e8o"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=111RXRWuNk2CMhDbzY2gIudmcB70lsQSy",
-             "val": "https://drive.google.com/uc?id=11OLGeEkS6Wkv3q5ObNAinim5hqB5UU4D"
-           }
-         }
-       }
-     },
-     {
-       "naturalquestions": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=1-M_G-V7p0XvGmw3JWtOSfeAlrGLMu8U3",
-             "val": "https://drive.google.com/uc?id=1-hjnE4TvEp76eznP14DHIsqYnitY0rNW"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=118ALG23_Ayi7qrAAdExJKLiN21Xy7VO5",
-             "val": "https://drive.google.com/uc?id=11L8JW9llwDI-vg2LSXL4NmnHbOOHETBr"
-           }
-         }
-       }
-     },
-     {
-       "newsqa": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=1-hvqrR98PcajkorCvFvqz4fcuinZfgvr",
-             "val": "https://drive.google.com/uc?id=1-oZbc9QFvuDfxzDhwotOYwvcLfFbCJQC"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=11mraicFI6bb6KFcUl3unZxW0OSGg18UB",
-             "val": "https://drive.google.com/uc?id=10rM5-BYr1mrSVSRFqgiFuQdYvKuCDzD1"
-           }
-         }
-       }
-     },
-     {
-       "paq": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=10PrKt6kEgq07SNss5rZizRcqQxcxS84X",
-             "val": "https://drive.google.com/uc?id=1-kuu0RihKcve-EGFtwjYz8jOdN6rXbcM"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=11W8G2mmQ78LBwet5GFvwvNu5hfPWOblN",
-             "val": "https://drive.google.com/uc?id=10iItHXCfQ9wIsFmDUIMjdXbOSapRCAwj"
-           }
-         }
-       }
-     },
-     {
-       "searchqa": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=1-AOYSVQf4GI7UnXcZ2EYbDXszHetjdrS",
-             "val": "https://drive.google.com/uc?id=1-ZCCflByWZ3sBE_Hxirhfy9KQQ8d2ABN"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=11ivqoK_aVjK6RpaT_QWjqJ3V9VMvw190",
-             "val": "https://drive.google.com/uc?id=11Uvh0s17N7hvwfPF75x0Ko6xdccfsZcl"
-           }
-         }
-       }
-     },
-     {
-       "squad2": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=10Akh_VNH5Kvp0BiKbq9irA-u5zgdoxPy",
-             "val": "https://drive.google.com/uc?id=10QszRRFigIz_bAWuhOkn3r3dngHLpDSy"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=110xR1sye2Qx-QrrOPb3IDLhkjLCqG1Zy",
-             "val": "https://drive.google.com/uc?id=1144-Zt5-b8nFZOgUXbnk7l-RLSHTnJmN"
-           }
-         }
-       }
-     },
-     {
-       "triviaqa": {
-         "original-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=1-lb4JylW7olUzbLJBaBqNK-HAcxulxEI",
-             "val": "https://drive.google.com/uc?id=1-T0LHqgSvKyIx6YehQ-TrBOhJqygV-Si"
-           }
-         },
-         "random-split": {
-           "description": "",
-           "version": "1.0.0",
-           "citation": "",
-           "url_data": {
-             "train": "https://drive.google.com/uc?id=11-y9PSfAAP8-L8PtBd_51RaA-5MjsvPH",
-             "val": "https://drive.google.com/uc?id=11WzPoeBLWbfMyR8xozfl-xMOSqevIIrs"
-           }
-         }
-       }
-     }
-   ]
- }
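
Each element of `files` is a single-key object, which is why `OneBuild` in the deleted `causalqa.py` reads the dataset name with `[*data_info][0]` and then iterates the split schemes beneath it. A minimal sketch of that traversal over the triviaqa entry above:

```python
# One element of the "files" list, copied from the JSON above.
data_info = {
    "triviaqa": {
        "original-split": {
            "description": "",
            "version": "1.0.0",
            "citation": "",
            "url_data": {
                "train": "https://drive.google.com/uc?id=1-lb4JylW7olUzbLJBaBqNK-HAcxulxEI",
                "val": "https://drive.google.com/uc?id=1-T0LHqgSvKyIx6YehQ-TrBOhJqygV-Si",
            },
        },
    },
}

main_name = [*data_info][0]  # "triviaqa": the single top-level key
for split_scheme, info in data_info[main_name].items():
    config_name = "{}.{}".format(main_name, split_scheme)  # "triviaqa.original-split"
    print(config_name, info["url_data"]["train"])
```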
 
 
source/features_metadata.yaml DELETED
@@ -1,158 +0,0 @@
- eli5:
-   original-split:
-     id: string
-   random-split:
-     id: string
- gooaq:
-   original-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
- hotpotqa:
-   original-split:
-     id: string
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: string
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
- msmarco:
-   original-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
- naturalquestions:
-   original-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
- newsqa:
-   original-split:
-     id: string
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: string
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
- paq:
-   original-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
- searchqa:
-   original-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: int64
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
- squad2:
-   original-split:
-     id: string
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: string
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
- triviaqa:
-   original-split:
-     id: string
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
-   random-split:
-     id: string
-     question: string
-     question_processed: string
-     context: string
-     context_processed: string
-     answer: string
-     answer_processed: string
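
These dtype strings are what `_info()` in the deleted loader converted into `datasets.Value` objects. A small sketch using the gooaq schema above:

```python
import datasets

# Feature dtypes as declared for gooaq in the YAML above.
data_features = {
    "id": "int64",
    "question": "string",
    "question_processed": "string",
    "context": "string",
    "context_processed": "string",
    "answer": "string",
    "answer_processed": "string",
}

# Mirrors the dict comprehension in the deleted _info() method.
features = datasets.Features(
    {name: datasets.Value(dtype) for name, dtype in data_features.items()}
)
print(features)
```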
 
 
triviaqa.random-split/causalqa-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17e4de7a3658f675fd9c744cbd2d3a0fd826a101df8be54e96a72e6d4faca420
+ size 6042010
triviaqa.random-split/causalqa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b27a598821708bc92359edf8cc62df9f649d0d99db7446734f2c98045cfb670
+ size 702912
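
The three added lines per file are Git LFS pointers: the repository itself stores only the object ID and size, while the Parquet bytes live in LFS storage. A sketch of reading one converted split directly; it assumes the file resolves at the path added above and that `pandas` has `pyarrow` (and `fsspec` for HTTP paths) available:

```python
import pandas as pd

# Hub "resolve" URLs follow the LFS pointer to the actual file contents.
url = (
    "https://huggingface.co/datasets/jakartaresearch/causalqa/"
    "resolve/main/triviaqa.random-split/causalqa-train.parquet"
)
df = pd.read_parquet(url)
print(df.shape)
```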