gmongaras committed
Commit 1abdea1
1 Parent(s): b6ebc58

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +21 -0
README.md CHANGED
@@ -1,3 +1,24 @@
+---
+dataset_info:
+  features:
+  - name: input_ids
+    sequence: int32
+  - name: token_type_ids
+    sequence: int8
+  - name: attention_mask
+    sequence: int8
+  splits:
+  - name: train
+    num_bytes: 51067549265.998314
+    num_examples: 131569119
+  download_size: 15915934708
+  dataset_size: 51067549265.998314
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+---
 Dataset using the bert-cased tokenizer, cutoff sentences to 512 length (not sentence pairs), all sentence pairs extracted.
 
 Original datasets:
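
The preprocessing the README describes (truncate each tokenized sequence to 512 tokens, with no sentence-pair packing) comes down to slicing token-id lists. A minimal sketch in plain Python, assuming the token ids have already been produced by the bert-cased tokenizer; the function name and toy data below are hypothetical, not taken from this repo's code:

```python
# Hypothetical sketch of the cutoff step described above: each token-id
# sequence is truncated to at most 512 entries (single sequences only,
# no sentence-pair handling).

MAX_LEN = 512

def truncate_ids(token_ids, max_len=MAX_LEN):
    """Cut a token-id sequence off at max_len; shorter sequences pass through."""
    return token_ids[:max_len]

# Toy example: a sequence longer than the cutoff is clipped to 512 ids.
ids = list(range(600))
print(len(truncate_ids(ids)))  # 512
```

In practice this slicing would follow a tokenizer call (the README names the bert-cased tokenizer but does not show the tokenization code), and the resulting `input_ids`, `token_type_ids`, and `attention_mask` columns match the features listed in the YAML front matter above.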