Reorder split names

#4
by albertvillanova - opened
Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -38,15 +38,15 @@ dataset_info:
   - name: id
     dtype: string
   splits:
-  - name: test
-    num_bytes: 49925756
-    num_examples: 11490
   - name: train
     num_bytes: 1261704133
     num_examples: 287113
   - name: validation
     num_bytes: 57732436
     num_examples: 13368
+  - name: test
+    num_bytes: 49925756
+    num_examples: 11490
   download_size: 585439472
   dataset_size: 1369362325
 - config_name: 1.0.0
@@ -58,15 +58,15 @@ dataset_info:
   - name: id
     dtype: string
   splits:
-  - name: test
-    num_bytes: 49925756
-    num_examples: 11490
   - name: train
     num_bytes: 1261704133
     num_examples: 287113
   - name: validation
     num_bytes: 57732436
     num_examples: 13368
+  - name: test
+    num_bytes: 49925756
+    num_examples: 11490
   download_size: 585439472
   dataset_size: 1369362325
 - config_name: 2.0.0
@@ -78,15 +78,15 @@ dataset_info:
   - name: id
     dtype: string
   splits:
-  - name: test
-    num_bytes: 49925756
-    num_examples: 11490
   - name: train
     num_bytes: 1261704133
     num_examples: 287113
   - name: validation
     num_bytes: 57732436
     num_examples: 13368
+  - name: test
+    num_bytes: 49925756
+    num_examples: 11490
   download_size: 585439472
   dataset_size: 1369362325
 ---
@@ -277,4 +277,4 @@ The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 Lic
 
 ### Contributions
 
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
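
For reference, a minimal sketch (not part of this PR) of how the reordered metadata surfaces downstream, assuming the canonical Hub id `cnn_dailymail` and the `datasets` library: split names are read from the card's `dataset_info` block, so after this change they should enumerate as train, validation, test rather than test first.

```python
# Minimal sketch, not part of this PR: check the split order exposed by the
# card metadata. The Hub id "cnn_dailymail" is an assumption for illustration.
from datasets import get_dataset_split_names, load_dataset

# Split names are taken from the dataset's metadata; after this change they
# should come back as ['train', 'validation', 'test'].
print(get_dataset_split_names("cnn_dailymail", "1.0.0"))

# Loading the config yields a DatasetDict whose keys follow the same order.
ds = load_dataset("cnn_dailymail", "1.0.0")
print(list(ds))
```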