gmongaras committed on
Commit 12e0ad9
1 Parent(s): ae8c99b

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +17 -0
README.md CHANGED
@@ -1,3 +1,20 @@
+---
+dataset_info:
+  features:
+  - name: text
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 36961083473
+    num_examples: 136338653
+  download_size: 13895887135
+  dataset_size: 36961083473
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+---
 Dataset using the bert-cased tokenizer, cutoff sentences to 512 length (not sentence pairs), all sentence pairs extracted.
 
 Original datasets:
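For context, here is a minimal sketch of consuming a dataset matching this card. The repo id `gmongaras/<dataset-name>` is a placeholder, since the commit does not name the repository, and `bert-base-cased` is assumed to be the "bert-cased tokenizer" the card mentions; the `default` config, `train` split, and `text` feature come from the YAML above.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder repo id: the commit shows only the card contents,
# not the repository name. Streaming avoids downloading ~14 GB.
dataset = load_dataset("gmongaras/<dataset-name>", split="train", streaming=True)

# Assumption: "bert-cased tokenizer" refers to bert-base-cased.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Re-tokenize one example the way the card describes: single
# sequences (not sentence pairs) cut off at 512 tokens.
example = next(iter(dataset))
encoded = tokenizer(example["text"], truncation=True, max_length=512)
print(len(encoded["input_ids"]))  # at most 512
```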