Datasets

Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
Qingyun committed
Commit a65868a
1 Parent(s): 343497f

Upload dataset
CC-MAIN-2016-18/train-00000-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f19835f689a22cd760517e6b1ca7b4917955af167b21f9b69f2038d427332ddf
+size 419681803
CC-MAIN-2016-18/train-00001-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af6856047dffc49924655114a3b41d475872cf7ce2367b940ae659839eb8d932
+size 420469279
CC-MAIN-2016-18/train-00002-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9ca84b526486eb2b85429f1150acdd528f3e18f6e7b89ebb8818d5014023ad9
+size 417587750
CC-MAIN-2016-18/train-00003-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3d3c1717aeb00bd5164a15821717e70c02e799ab1af1df3dfbce34e757fdb82
+size 417761984
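
Each of the four `ADDED` files above is a Git LFS pointer rather than the Parquet data itself: a three-line text stub recording the LFS spec version, the SHA-256 of the real object, and its size in bytes. As a minimal sketch, the pointer format can be parsed with a few lines of Python (the `parse_lfs_pointer` helper is illustrative, not part of any tooling in this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its 'key value' fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents for train-00000-of-00004.parquet, as shown in the diff above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:f19835f689a22cd760517e6b1ca7b4917955af167b21f9b69f2038d427332ddf\n"
    "size 419681803\n"
)

info = parse_lfs_pointer(pointer)
print(info["oid"])   # sha256:... names the real ~400 MB parquet object in LFS storage
print(info["size"])  # 419681803
```

The `oid` is what `git lfs` uses to fetch the actual shard from LFS storage; the `size` lets clients verify the download.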
README.md CHANGED
@@ -1100,6 +1100,58 @@ dataset_info:
     num_examples: 1738650
   download_size: 4005599785
   dataset_size: 9329220105
+- config_name: CC-MAIN-2016-18
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 3897220786
+    num_examples: 747570
+  download_size: 1675500816
+  dataset_size: 3897220786
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -1185,6 +1237,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2016-07/train-*
+- config_name: CC-MAIN-2016-18
+  data_files:
+  - split: train
+    path: CC-MAIN-2016-18/train-*
 ---

 We are uploading the dataset files ~
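
The new config's `path: CC-MAIN-2016-18/train-*` glob matches the four shards added in this commit, which follow the Hub's `train-XXXXX-of-NNNNN.parquet` sharding convention. A small sketch of how that naming pattern is generated (the `shard_names` helper is illustrative, not an API of the `datasets` library):

```python
def shard_names(config: str, num_shards: int) -> list:
    """Generate Hub-style sharded parquet file names for a config's train split."""
    return [
        f"{config}/train-{i:05d}-of-{num_shards:05d}.parquet"
        for i in range(num_shards)
    ]

names = shard_names("CC-MAIN-2016-18", 4)
print(names[0])   # CC-MAIN-2016-18/train-00000-of-00004.parquet
print(names[-1])  # CC-MAIN-2016-18/train-00003-of-00004.parquet
```

Because the glob resolves shards by name, the `data_files` entry stays valid if more shards are uploaded later under the same pattern.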