slhenty committed
Commit a73fe7b
Parent: f9a2ff5

Add usage notes to dataset card

Files changed (1):
  README.md +44 -3

README.md CHANGED
@@ -25,19 +25,60 @@ The feature style is specified as a named configuration when loading the dataset

Load the **cf-nli** dataset

```
- # TBD
```

Load the **cf-nli-nei** dataset

```
- # TBD
```

Load the **cf-stsb** dataset

```
- # TBD
```

 
Load the **cf-nli** dataset

```
+ # if datasets not already in your environment
+ !pip install datasets
+
+ from datasets import load_dataset
+
+ # all splits...
+ dd = load_dataset('climate-fever-nli-stsb', 'cf-nli')
+
+ # ... or specific split (only 'train' is available)
+ ds_train = load_dataset('climate-fever-nli-stsb', 'cf-nli', split='train')
+
+ ## ds_train can now be injected into SentenceBERT training scripts at the point
+ ## where individual sentence pairs are aggregated into
+ ## {'claim': {'entailment': set(), 'contradiction': set(), 'neutral': set()}} dicts
+ ## for further processing into training samples
```
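The aggregation step the trailing comments describe can be sketched as below. The column names (`premise`, `hypothesis`, `label`) and the example rows are assumptions standing in for the actual cf-nli schema (check `ds_train.features` for the real one); this is a minimal sketch, not the dataset's confirmed API.

```python
from collections import defaultdict

# Hypothetical rows standing in for ds_train; real column names may differ.
rows = [
    {"premise": "Global temperatures are rising.",
     "hypothesis": "The planet is warming.",
     "label": "entailment"},
    {"premise": "Global temperatures are rising.",
     "hypothesis": "The climate is stable.",
     "label": "contradiction"},
]

# Aggregate individual sentence pairs into
# {'claim': {'entailment': set(), 'contradiction': set(), 'neutral': set()}}
# dicts, mirroring the structure described in the comments above.
claims = defaultdict(
    lambda: {"entailment": set(), "contradiction": set(), "neutral": set()}
)
for row in rows:
    claims[row["premise"]][row["label"]].add(row["hypothesis"])
```

A `defaultdict` keeps the loop free of key-existence checks; each new claim starts with three empty label sets.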

Load the **cf-nli-nei** dataset

```
+ # if datasets not already in your environment
+ !pip install datasets
+
+ from datasets import load_dataset
+
+ # all splits...
+ dd = load_dataset('climate-fever-nli-stsb', 'cf-nli-nei')
+
+ # ... or specific split (only 'train' is available)
+ ds_train = load_dataset('climate-fever-nli-stsb', 'cf-nli-nei', split='train')
+
+ ## ds_train can now be injected into SentenceBERT training scripts at the point
+ ## where individual sentence pairs are aggregated into
+ ## {'claim': {'entailment': set(), 'contradiction': set(), 'neutral': set()}} dicts
+ ## for further processing into training samples
```

Load the **cf-stsb** dataset

```
+ # if datasets not already in your environment
+ !pip install datasets
+
+ from datasets import load_dataset
+
+ # all splits...
+ dd = load_dataset('climate-fever-nli-stsb', 'cf-stsb')
+
+ # ... or specific split ('train', 'dev', 'test' available)
+ ds_dev = load_dataset('climate-fever-nli-stsb', 'cf-stsb', split='dev')
+
+ ## ds_dev (or test) can now be injected into SentenceBERT training scripts at the point
+ ## where individual sentence pairs are aggregated into
+ ## a list of dev (or test) samples
```
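The list of dev samples the trailing comments mention can be sketched as below. The column names (`sentence1`, `sentence2`, `score`) and the STS-B-style 0-5 score range are assumptions standing in for the actual cf-stsb schema (check `ds_dev.features`); this is a sketch under those assumptions, not the dataset's confirmed layout.

```python
# Hypothetical rows standing in for ds_dev; real column names may differ.
rows = [
    {"sentence1": "Sea levels are rising.",
     "sentence2": "Oceans are getting higher.",
     "score": 4.5},
    {"sentence1": "Sea levels are rising.",
     "sentence2": "Forests are shrinking.",
     "score": 1.0},
]

# Collect dev samples as (sentence1, sentence2, similarity) triples with the
# score normalized to [0, 1], the form typically fed to an STS evaluator.
dev_samples = [
    (row["sentence1"], row["sentence2"], row["score"] / 5.0) for row in rows
]
```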