Rricha committed
Commit 888f2c7
1 Parent(s): f885e98

Update climate-evaluation.py

Files changed (1)
  1. climate-evaluation.py +7 -7
climate-evaluation.py CHANGED
@@ -89,7 +89,7 @@ class ClimateEvaluation(datasets.GeneratorBasedBuilder):
  text_features={"text": "text"},
  label_classes=[0, 1, 2],
  label_column="label",
- data_dir="climate-evaluation/ClimateStance",
+ data_dir="ClimateStance",
  citation=textwrap.dedent(
      """\
      @inproceedings{vaid-etal-2022-towards,
@@ -120,7 +120,7 @@ class ClimateEvaluation(datasets.GeneratorBasedBuilder):
  text_features={"text": "text"},
  label_classes=["0", "1", "2", "3", "4"],
  label_column="label",
- data_dir="climate-evaluation/ClimateEng",
+ data_dir="ClimateEng",
  citation=textwrap.dedent(
      """\
      @inproceedings{vaid-etal-2022-towards,
@@ -151,7 +151,7 @@ class ClimateEvaluation(datasets.GeneratorBasedBuilder):
  text_features={"text": "sentence"},
  label_classes=["0", "1"],
  label_column="label",
- data_dir="climate-evaluation/ClimaText",
+ data_dir="ClimaText",
  citation=textwrap.dedent(
      """\
      @misc{varini2021climatext,
@@ -172,7 +172,7 @@ class ClimateEvaluation(datasets.GeneratorBasedBuilder):
  """\
  CDP-QA is a dataset compiled from the questionnaires of the Carbon Disclosure Project, where cities, corporations, and states disclose their environmental information. The dataset presents pairs of questions and answers, and the objective is to predict whether a given answer is valid for the corresponding question. We benchmarked ClimateGPT on the questionnaires from the Combined split. """
  ),
- data_dir="climate-evaluation/CDP",
+ data_dir="CDP",
  text_features={"question": "question", "answer": "answer"},
  label_classes=["0", "1"],
  label_column="label",
@@ -196,7 +196,7 @@ class ClimateEvaluation(datasets.GeneratorBasedBuilder):
  """\
  The Exeter Climate Claims dataset contains textual data from 33 influential climate contrarian blogs and the climate-change-related content from 20 conservative think tanks spanning the years 1998 to 2020. Annotation of the dataset was done manually using a thorough three-layer taxonomy of (climate-change related) contrarian claims, which was developed by the authors. We utilize this dataset specifically for the binary classification task of discerning whether a given text contains a contrarian claim pertaining to climate change or not. """
  ),
- data_dir="climate-evaluation/exeter",
+ data_dir="exeter",
  text_features={"text": "text"},
  label_classes=["0", "1"],
  label_column="label",
@@ -223,7 +223,7 @@ class ClimateEvaluation(datasets.GeneratorBasedBuilder):
  EXAMS is a multiple choice question answering collected from high school examinations. To evaluate ClimateGPT on the cascaded machine translation approach, we evaluate on the English translation of the Arabic subset of this dataset. The Arabic subset covers questions from biology, physics, science, social science and Islamic studies.
  """
  ),
- data_dir="climate-evaluation/exams/translated",
+ data_dir="exams/translated",
  text_features={"subject": "subject", "question_stem": "question_stem", "choices": "choices"},
  label_classes=["A", "B", "C", "D"],
  label_column="answerKey",
@@ -261,7 +261,7 @@ class ClimateEvaluation(datasets.GeneratorBasedBuilder):
  EXAMS is a multiple choice question answering collected from high school examinations. To evaluate ClimateGPT on the cascaded machine translation approach, we evaluate on the Arabic subset of this dataset. The Arabic subset covers questions from biology, physics, science, social science and Islamic studies. Note, this dataset is in arabic.
  """
  ),
- data_dir="climate-evaluation/exams/",
+ data_dir="exams/",
  text_features={"subject": "subject", "question_stem": "question_stem", "choices": "choices"},
  label_classes=["A", "B", "C", "D"],
  label_column="answerKey",
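
All seven hunks make the same change: the climate-evaluation/ prefix is dropped from each config's data_dir, so the data paths are given relative to the dataset repository root rather than a nested checkout. As a rough illustration of where such a relative data_dir ends up, the sketch below shows how a datasets.GeneratorBasedBuilder config commonly resolves it in _split_generators. This is a minimal sketch, not the repository's actual loading code; the CSV file names, column names, and label set are assumptions made for illustration.

# Minimal sketch (not the actual climate-evaluation.py): how a relative
# data_dir such as "ClimateStance" is typically consumed by a
# datasets.GeneratorBasedBuilder.  File and column names are assumptions.
import csv
import os

import datasets


class ClimateStanceSketch(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description="Sketch of a stance-classification config.",
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["0", "1", "2"]),
                }
            ),
        )

    def _split_generators(self, dl_manager):
        # After this commit the config's data_dir is relative to the dataset
        # repository root (e.g. "ClimateStance"), so nothing prepends the
        # repository name anymore.
        data_dir = self.config.data_dir or "ClimateStance"
        files = dl_manager.download(
            {
                "train": os.path.join(data_dir, "train.csv"),  # assumed file name
                "test": os.path.join(data_dir, "test.csv"),    # assumed file name
            }
        )
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": files["test"]}),
        ]

    def _generate_examples(self, filepath):
        # Assumes a CSV with "text" and "label" columns.
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.DictReader(f)):
                yield idx, {"text": row["text"], "label": row["label"]}

Under these assumptions, a config would then be loaded by name, e.g. datasets.load_dataset(<path or ID of this repository>, "ClimateStance"), with the relative data_dir resolved against the repository root at download time.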