updated dataset card

#1
by dcisek93 - opened
Files changed (1)
  1. README.md +46 -0
README.md CHANGED
---
# Dataset Card for "climate_fever_fixed"

### Dataset Summary

This dataset was created to help our team develop a model that performs climate change-related fact checking more accurately. Our approach is heavily influenced by the work of the [ClimateBERT](https://climatebert.ai/about) team, and we likewise leveraged a BERT language model for this task. This dataset is an edited version of the [Climate_Fever](https://huggingface.co/datasets/climate_fever) dataset hosted on HuggingFace. Climate_Fever is composed of climate-related documents annotated with labels for fact-checking and misinformation. For the climate-plus project, we modified the dataset to remove redundancy and keep only the essentials of a text-entailment problem: the claim as the premise and the evidence as the hypothesis.

### Data Fields

This dataset contains 7,675 records, each composed of the following attributes:

- `claim_id`: an `integer` feature that serves as a unique identifier for each record.
- `claim`: a `string` feature containing the raw text of a climate-related claim.
- `evidence`: a `string` feature providing free-text evidence related to the claim.
- `label`: a `class label` feature, where 0 = "supports", 1 = "refutes", and 2 = "not enough info".
- `category`: a `string` feature providing additional detail about the focus of a given claim.

This dataset was then split into train, test, and validation sets to enable proper evaluation of our model. The splits contain the following amounts of data:

- `Train`: 4300 records
- `Test`: 1540 records
- `Val`: 1840 records

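The schema above can be sketched in a few lines of Python. This is a minimal illustration of the field layout and the class-label mapping; the record values are invented for illustration and only the field names and label encoding come from the card:

```python
# Minimal sketch of the schema described above. Record values are invented;
# only the field names and the label mapping come from the dataset card.
label_names = ["supports", "refutes", "not enough info"]  # class ids 0, 1, 2

record = {
    "claim_id": 0,                                   # unique row identifier
    "claim": "Sea levels are rising.",               # raw claim text (premise)
    "evidence": "Tide gauges show a global rise.",   # evidence text (hypothesis)
    "label": 0,                                      # class id, see label_names
    "category": "oceans",                            # focus of the claim
}

def decode_label(row: dict) -> str:
    """Map the integer class label to its human-readable name."""
    return label_names[row["label"]]

print(decode_label(record))  # supports
```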
### Source Data

This dataset is an evolved version of the original [Climate_Fever](https://huggingface.co/datasets/climate_fever) dataset hosted on HuggingFace, adapted to meet the needs of our team as we worked on a specific climate change-related task. The original dataset followed the FEVER methodology, discussed in more detail [here](https://www.amazon.science/blog/the-fever-data-set-what-doesnt-kill-it-will-make-it-stronger). It consists of 1,535 real-world claims regarding climate change collected from the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute, or do not give enough information to validate the claim, totaling 7,675 claim-evidence pairs.
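The two figures quoted above are consistent with each other, as a quick check confirms:

```python
# Consistency check for the counts above: 1,535 claims, each paired with
# five annotated evidence sentences, yields 7,675 claim-evidence pairs.
num_claims = 1535
evidence_per_claim = 5
total_pairs = num_claims * evidence_per_claim
print(total_pairs)  # 7675
```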

### Methodology

This dataset was curated by our team to reduce redundancy and keep only the essentials of a text-entailment problem: the claim as the premise and the evidence as the hypothesis. Each claim in the original dataset has multiple evidence sentences; we expanded this one-to-many relation into one-to-one, so the modified climate_fever dataset contains exactly one claim-evidence pair per row.
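The expansion step can be sketched as follows. The input layout (an `evidences` list per claim) is an assumption about the original dataset's structure, and the sample rows are invented:

```python
# Sketch of the flattening step: expand each claim's list of evidence
# sentences into separate claim-evidence rows. The "evidences"-list input
# layout is an assumption; the sample data is invented for illustration.
original_rows = [
    {"claim_id": 0, "claim": "Sea levels are rising.",
     "evidences": ["Tide gauges show a global rise.",
                   "Satellite altimetry confirms the trend."]},
    {"claim_id": 1, "claim": "Glaciers are retreating worldwide.",
     "evidences": ["Long-term surveys document glacier mass loss."]},
]

# One output row per claim-evidence pair; claims repeat across rows.
flattened = [
    {"claim_id": row["claim_id"], "claim": row["claim"], "evidence": ev}
    for row in original_rows
    for ev in row["evidences"]
]

print(len(flattened))  # 3 rows: one per claim-evidence pair
```

Note that the claim text is duplicated across the rows derived from it, which is the redundancy trade-off accepted in exchange for a flat one-to-one schema.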

### Languages

The text contained in the dataset, both the claims and the Wikipedia-derived evidence sentences, is entirely in English. The associated BCP-47 code is [`en`](https://www.techonthenet.com/js/language_tags.php), to ensure clear labeling of language usage for downstream tasks and other future applications.
  [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)