Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
Qingyun committed
Commit 343497f
1 parent: d642de4

Upload dataset
CC-MAIN-2016-07/train-00000-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80476e23aca3cc403d714778783cc8d3145670f268644c092af4d58b0f1f275d
+ size 399613392
CC-MAIN-2016-07/train-00001-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5bd0c6885e45348c21886b4b6126a776d1d33b0328178a357f33899d08ef00c9
+ size 398260097
CC-MAIN-2016-07/train-00002-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc88cfd32967832593e30d2d432bd6156c024970d425c25148117e3fe044ac21
+ size 402059394
CC-MAIN-2016-07/train-00003-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f95e8d079eadd4c698a9ee3bc7945f5beb0b58bcdd93602192ea27ac4739ec36
+ size 399636022
CC-MAIN-2016-07/train-00004-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d19c2c674d985cfa9723c37806e8d04097653712d0cc7129a3b5601c7fc3eea
+ size 403647181
CC-MAIN-2016-07/train-00005-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:060b8a01428e5faf9bdc82b4a307d235da392751ed36b069dfacd842ad6ccc77
+ size 400782106
CC-MAIN-2016-07/train-00006-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfb170b913807710733eccc7e443c158b50ff2147f7375e24b78491652ee4f86
+ size 400654320
CC-MAIN-2016-07/train-00007-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c7e68f1165bf32af554ebc359fb1836477f12595be1779387132d98d27fcaea
+ size 400285059
CC-MAIN-2016-07/train-00008-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0084d377697c9bc3a9efd2a4b4fddb3814c5e2380b87b5c2933d980c6356af59
+ size 402092143
CC-MAIN-2016-07/train-00009-of-00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb2c6b2b940396bb11b7f03f25a45f509d52ef3d161c225030e2d38dad6a80e4
+ size 398570071
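
These shards are committed as Git LFS pointer files: the repository stores only the spec version, a SHA-256 object id, and the byte size, while the actual parquet bytes live in LFS storage. A minimal sketch of checking a downloaded shard against its pointer, assuming the file has already been fetched to the local path shown (the path is illustrative):

```python
import hashlib
import os

# Values copied from the LFS pointer for train-00000-of-00010.parquet above.
EXPECTED_OID = "80476e23aca3cc403d714778783cc8d3145670f268644c092af4d58b0f1f275d"
EXPECTED_SIZE = 399613392

# Hypothetical local path; adjust to wherever the shard was downloaded.
path = "CC-MAIN-2016-07/train-00000-of-00010.parquet"

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    # Stream in 1 MiB chunks so the ~400 MB file is never fully in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch with LFS pointer"
assert sha256.hexdigest() == EXPECTED_OID, "sha256 mismatch with LFS pointer"
print("shard matches its LFS pointer")
```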
README.md CHANGED
@@ -1048,6 +1048,58 @@ dataset_info:
  num_examples: 1537468
  download_size: 3489600630
  dataset_size: 8343050753
+ - config_name: CC-MAIN-2016-07
+   features:
+   - name: general_metadata
+     struct:
+     - name: domain
+       sequence: string
+     - name: fluency_prob
+       dtype: float64
+     - name: id
+       dtype: string
+     - name: non_advertisement_prob
+       dtype: float64
+     - name: politics_prob
+       dtype: float64
+     - name: porn_prob
+       dtype: float64
+     - name: toxic_prob
+       dtype: float64
+     - name: url
+       dtype: string
+   - name: images
+     sequence: string
+   - name: texts
+     sequence: string
+   - name: metadata
+     list:
+     - name: aesthetic_prob
+       dtype: float64
+     - name: bytes
+       dtype: int64
+     - name: d_hash
+       dtype: string
+     - name: d_hash_dup_count
+       dtype: int64
+     - name: height
+       dtype: int64
+     - name: img_url_sha
+       dtype: string
+     - name: p_hash
+       dtype: string
+     - name: p_hash_dup_count
+       dtype: int64
+     - name: unsafe_prob
+       dtype: float64
+     - name: width
+       dtype: int64
+   splits:
+   - name: train
+     num_bytes: 9329220105
+     num_examples: 1738650
+   download_size: 4005599785
+   dataset_size: 9329220105
  configs:
  - config_name: CC-MAIN-2013-20
    data_files:
@@ -1129,6 +1181,10 @@ configs:
  data_files:
  - split: train
    path: CC-MAIN-2015-48/train-*
+ - config_name: CC-MAIN-2016-07
+   data_files:
+   - split: train
+     path: CC-MAIN-2016-07/train-*
  ---

  We are uploading the dataset files ~
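
With this commit the new shards become reachable through the `CC-MAIN-2016-07` config declared in the `configs` section of the card. A minimal loading sketch with the 🤗 Datasets library; the repository id below is a placeholder, not the real repo name, and streaming is used so the ~4 GB of shards are not downloaded up front:

```python
from datasets import load_dataset

# "user/dataset-name" is a placeholder for this dataset's actual Hub repo id.
ds = load_dataset("user/dataset-name", "CC-MAIN-2016-07", split="train", streaming=True)

# Each record follows the features declared above: interleaved `texts` and `images`
# plus a per-document `general_metadata` struct and a per-image `metadata` list.
sample = next(iter(ds))
print(sample["general_metadata"]["url"])
print(len(sample["texts"]), len(sample["images"]))
```

Since the shards are plain parquet, they can also be read directly, for example with `dask.dataframe.read_parquet` over the `CC-MAIN-2016-07/train-*.parquet` paths, which is what the Dask tag on the card points to.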