Add dataset derivation info to dataset card.
README.md
CHANGED
### Dataset Summary

The CLIMATE-FEVER dataset modified to supply NLI-style (cf-nli) features or STSb-style (cf-stsb) features that SentenceBERT training scripts can use as drop-in replacements for the AllNLI and/or STSb datasets.

There are two cf-nli configurations: one derived only from SUPPORTS and REFUTES evidence (cf-nli), and one that also derives data from NOT_ENOUGH_INFO evidence based on the annotator votes (cf-nli-nei).

The feature style is specified as a named configuration when loading the dataset: cf-nli, cf-nli-nei, or cf-stsb. See the usage notes below for `load_dataset` examples.
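As a sketch, the named configurations might be selected with `datasets.load_dataset` like this (the repository id used below is a placeholder, not this dataset's actual Hub id):

```python
def load_climate_fever(config: str):
    """Load one named configuration of this dataset.

    config must be one of: "cf-nli", "cf-nli-nei", "cf-stsb".
    NOTE: "user/climate-fever-nli-stsb" is a placeholder repository id;
    substitute this dataset's actual Hugging Face Hub id.
    """
    valid = {"cf-nli", "cf-nli-nei", "cf-stsb"}
    if config not in valid:
        raise ValueError(f"config must be one of {sorted(valid)}, got {config!r}")
    # Imported lazily so the helper can be defined without `datasets` installed.
    from datasets import load_dataset
    return load_dataset("user/climate-fever-nli-stsb", config)
```

For example, `load_climate_fever("cf-nli")` would return the NLI-style splits, and `load_climate_fever("cf-stsb")` the STSb-style ones.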
### Supported Tasks and Leaderboards
### Curation Rationale

SentenceBERT models are designed for 'Domain Adaptation' and/or 'Fine-tuning' using labeled data in the downstream task domain. As a bi-encoder, the primary objective function is real-valued similarity scoring. Typical training datasets use NLI-style features as input; STSb-style features serve both as the model evaluation during training and as the post-hoc measure of _intrinsic_ STSb performance. Classification tasks typically use a classifier network that accepts SentenceBERT encodings as input and is trained on class-labeled datasets.

So, to fine-tune a SentenceBERT model in the climate-change domain, a labeled climate-change dataset would be ideal. Much like the authors of the CLIMATE-FEVER dataset, we knew of no other _labeled_ datasets specific to climate change. And while CLIMATE-FEVER is suitably labeled for classification tasks, it is not ready for similarity tuning in the style of SentenceBERT.

This modified CLIMATE-FEVER dataset attempts to fill that gap by deriving the NLI-style features typically used in pre-training and fine-tuning a SentenceBERT model (the 'train' split). SentenceBERT also uses STSb-style features to evaluate model performance both during training (the 'dev' or 'validation' split) and post-hoc to gauge _intrinsic_ model performance on STSb (the 'test' split).
### Source Data
#### Initial Data Collection and Normalization

See CLIMATE-FEVER.

#### Who are the source language producers?

See CLIMATE-FEVER.

### Annotations
#### Annotation process

##### NLI Derivation

**cf-nli**

For each claim that has both SUPPORTS and REFUTES evidence, the following labeled pairs are created in the style of an NLI dataset:

_NLI Fields_

| split | dataset | sentence1 | sentence2 | label |
|---|---|---|---|---|
| {'train', 'test'} | 'climate-fever' | claim | evidence | evidence_label (SUPPORTS → 'entailment', REFUTES → 'contradiction') |

> Note that, by definition, only claims classified as DISPUTED include both SUPPORTS and REFUTES evidence, so this dataset is limited to a small subset of CLIMATE-FEVER.
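The cf-nli derivation above can be sketched in plain Python (a simplified illustration, not the card's actual generation script; record fields follow CLIMATE-FEVER's `claim`/`evidences`/`evidence_label` layout):

```python
# Map CLIMATE-FEVER evidence labels to NLI labels, per the table above.
LABEL_MAP = {"SUPPORTS": "entailment", "REFUTES": "contradiction"}

def derive_cf_nli(records):
    """records: iterable of dicts with a 'claim' string and an 'evidences'
    list of dicts, each with 'evidence' text and an 'evidence_label'."""
    pairs = []
    for rec in records:
        labels = {ev["evidence_label"] for ev in rec["evidences"]}
        # Only DISPUTED claims carry both labels, hence the small subset.
        if not {"SUPPORTS", "REFUTES"} <= labels:
            continue
        for ev in rec["evidences"]:
            if ev["evidence_label"] in LABEL_MAP:
                pairs.append({
                    "dataset": "climate-fever",
                    "sentence1": rec["claim"],
                    "sentence2": ev["evidence"],
                    "label": LABEL_MAP[ev["evidence_label"]],
                })
    return pairs
```

A claim with only SUPPORTS evidence contributes no pairs here; that gap is what the cf-nli-nei configuration below addresses.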

**cf-nli-nei**

This dataset uses the annotator 'votes' list to cast NOT_ENOUGH_INFO (NEI) evidence to SUPPORTS or REFUTES evidence. By doing so, claims in the SUPPORTS, REFUTES, and NEI classes can be used to generate additional sentence pairs.

_Casting NEI Evidence to SUPPORTS or REFUTES_

| votes | effective evidence_label |
|---|---|
| SUPPORTS > REFUTES | _SUPPORTS_ |
| SUPPORTS < REFUTES | _REFUTES_ |

Any claims that have _at least one_ SUPPORTS or REFUTES evidence, together with NEI evidence that can be cast to an effective _SUPPORTS_ or _REFUTES_, are then included in the dataset.
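The vote-casting rule in the table above can be sketched as follows (an illustrative sketch; a tied vote is treated here as uncastable, though the actual script may handle ties differently):

```python
def cast_nei(votes):
    """Cast a NOT_ENOUGH_INFO evidence by majority annotator vote.

    votes: list of annotator labels, e.g. ["SUPPORTS", "REFUTES", ...].
    Returns "SUPPORTS", "REFUTES", or None when the vote is tied.
    """
    supports = votes.count("SUPPORTS")
    refutes = votes.count("REFUTES")
    if supports > refutes:
        return "SUPPORTS"
    if supports < refutes:
        return "REFUTES"
    return None  # tied: the evidence stays NEI and contributes no pair
```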

##### STSb Derivation

TBD
#### Who are the annotators?