Correct descriptions on dataset card.
README.md
### Dataset Summary

The CLIMATE-FEVER dataset modified to supply NLI-style (**cf-nli**) features or STSb-style (**cf-stsb**) features that SentenceBERT training scripts can use as drop-in replacements for the AllNLI and/or STSb datasets.

There are two **cf-nli** datasets: one derived only from SUPPORTS and REFUTES evidence (**cf-nli**), and one that also derives data from NOT_ENOUGH_INFO evidence based on the annotator votes (**cf-nli-nei**).

The feature style is specified as a named configuration when loading the dataset: `cf-nli`, `cf-nli-nei`, or `cf-stsb`. See usage notes below for `load_dataset` examples.
### Curation Rationale

SentenceBERT models are designed for 'Domain Adaptation' and/or 'Fine-tuning' using labeled data in the downstream task domain. As a bi-encoder, the primary objective function is real-valued similarity scoring. Typical training datasets use NLI-style features as input, and STSb-style features both for model evaluation during training and for measuring post-hoc, _intrinsic_ STSb performance. Classification tasks typically use a classifier network that accepts SentenceBERT encodings as input and is trained on class-labeled datasets.

So, to fine-tune a SentenceBERT model for the climate-change domain, a labeled climate-change dataset would be ideal. Like the authors of the CLIMATE-FEVER dataset, we know of no other _labeled_ datasets specific to climate change. And while CLIMATE-FEVER is suitably labeled for classification tasks, it is not ready for similarity tuning in the style of SentenceBERT.

This modified CLIMATE-FEVER dataset attempts to fill that gap by deriving the NLI-style features typically used in pre-training and fine-tuning a SentenceBERT model. It also derives the STSb-style features SentenceBERT uses to evaluate model performance during training and, post-hoc, to gauge _intrinsic_ performance on STSb.

### Source Data
##### NLI Derivation

**cf-nli**

For each Claim that has both SUPPORTS evidence and REFUTES evidence, create the following labeled pairs in the style of an NLI dataset:

_NLI Fields_

| split | dataset | sentence1 | sentence2 | label |
|---|---|---|---|---|
| {'train', 'test'} | 'climate-fever' | claim | evidence | evidence_label: SUPPORTS -> 'entailment', REFUTES -> 'contradiction' |

> Note that, by definition, only claims classified as DISPUTED include both SUPPORTS and REFUTES evidence, so this dataset is limited to a small subset of CLIMATE-FEVER.
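The pairing rule above can be sketched in plain Python. The record layout below is a simplified stand-in for the actual CLIMATE-FEVER schema, and `nli_pairs` is an illustrative helper, not part of the dataset's code:

```python
# Sketch of the cf-nli pairing rule: one (claim, evidence) pair per
# SUPPORTS/REFUTES evidence of a claim that carries both evidence kinds.
# The claim/evidence record layout here is an assumption for illustration.

LABEL_MAP = {"SUPPORTS": "entailment", "REFUTES": "contradiction"}

def nli_pairs(claim):
    """Yield NLI-style rows for one claim record, or nothing when the
    claim lacks either SUPPORTS or REFUTES evidence."""
    labels = {e["evidence_label"] for e in claim["evidences"]}
    if not {"SUPPORTS", "REFUTES"} <= labels:
        return []  # only DISPUTED-style claims qualify for cf-nli
    return [
        {
            "dataset": "climate-fever",
            "sentence1": claim["claim"],
            "sentence2": e["evidence"],
            "label": LABEL_MAP[e["evidence_label"]],
        }
        for e in claim["evidences"]
        if e["evidence_label"] in LABEL_MAP  # NEI evidence is dropped here
    ]

example = {
    "claim": "Global sea levels are rising.",
    "evidences": [
        {"evidence": "Tide gauges show a steady rise.", "evidence_label": "SUPPORTS"},
        {"evidence": "Some local gauges show no change.", "evidence_label": "REFUTES"},
        {"evidence": "Sea level is hard to measure.", "evidence_label": "NOT_ENOUGH_INFO"},
    ],
}
rows = nli_pairs(example)
```

A claim with only SUPPORTS (or only REFUTES) evidence yields no rows, which is why cf-nli is restricted to the DISPUTED subset noted above.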
**cf-nli-nei**

This dataset uses the list of annotator 'votes' to cast NOT_ENOUGH_INFO (NEI) evidence to SUPPORTS or REFUTES evidence. By doing so, Claims in the SUPPORTS, REFUTES, and NEI classes can be used to generate additional sentence pairs.

_Casting NEI Evidence to SUPPORTS or REFUTES_

| votes | effective evidence_label |
|---|---|
| SUPPORTS > REFUTES | _SUPPORTS_ |
| SUPPORTS < REFUTES | _REFUTES_ |
Any Claims that have

* **_at least one_** SUPPORTS or REFUTES evidence, AND
* NEI evidence that can be cast to effective _SUPPORTS_ or _REFUTES_

are included in the dataset.
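A minimal sketch of the vote-casting rule, assuming each NEI evidence carries a list of per-annotator vote strings (the vote values and function name are illustrative):

```python
# Sketch of casting NEI evidence by annotator-vote majority.
# The vote representation is an assumption for illustration.

def cast_nei(votes):
    """Map one NEI evidence's annotator votes to an effective label.

    Returns 'SUPPORTS' or 'REFUTES' on a strict majority, or None on a
    tie, in which case the evidence cannot be cast and is excluded."""
    supports = votes.count("SUPPORTS")
    refutes = votes.count("REFUTES")
    if supports > refutes:
        return "SUPPORTS"
    if supports < refutes:
        return "REFUTES"
    return None

effective = cast_nei(["SUPPORTS", "SUPPORTS", "NOT_ENOUGH_INFO"])
tied = cast_nei(["SUPPORTS", "REFUTES"])
```

Tied votes fall into neither row of the casting table above, so such evidence contributes no sentence pair.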
##### STSb Derivation