Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
Qingyun committed
Commit 236e025
Parent: 926d239

Upload dataset

CC-MAIN-2014-49/train-00000-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47bd3381dba96dcdbae83d003cf570b30eb10a10fcc5238da3b417fc53d12c69
+size 396981461
CC-MAIN-2014-49/train-00001-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7d5aa7f0f8b7b73178882f3b16e62632fd26262e4c84f3a51c3abf4c494f811
+size 395863561
CC-MAIN-2014-49/train-00002-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a51edbdab206bfc2c2020ab057fbad20045a21b65201e4c95bab474e3c1d275
+size 396233191
CC-MAIN-2014-49/train-00003-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dab19fa7e797adcbad6d55d6ca98c60dbd366c8bc2ab3c2d7c286b58079df8d4
+size 396810368
CC-MAIN-2014-49/train-00004-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f588799b3e57e68421e1f680d03185a74e5b3a2ace2a2e6a0cc62dca027f5e3
+size 396840338
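
Each shard above is committed as a Git LFS pointer: the repository holds only the `version`, `oid`, and `size` lines, while the parquet bytes live in LFS storage. A minimal Python sketch of how a downloaded shard could be checked against its pointer; the path and values below are copied from the first pointer in this commit, and the helper name is illustrative:

```python
import hashlib
from pathlib import Path

def verify_lfs_object(parquet_path: str, expected_sha256: str, expected_size: int) -> bool:
    """Check a downloaded file against the oid/size recorded in its LFS pointer."""
    path = Path(parquet_path)
    # Cheap size check first; a truncated download fails before any hashing.
    if path.stat().st_size != expected_size:
        return False
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Hash in 1 MiB chunks so ~400 MB shards never load into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Values taken from the first pointer file in this commit.
ok = verify_lfs_object(
    "CC-MAIN-2014-49/train-00000-of-00005.parquet",
    "47bd3381dba96dcdbae83d003cf570b30eb10a10fcc5238da3b417fc53d12c69",
    396981461,
)
print("shard intact:", ok)
```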
README.md CHANGED
@@ -424,6 +424,58 @@ dataset_info:
     num_examples: 1511931
   download_size: 3505766162
   dataset_size: 7732679416
+- config_name: CC-MAIN-2014-49
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 4473311810
+    num_examples: 837735
+  download_size: 1982728919
+  dataset_size: 4473311810
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -457,6 +509,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2014-42/train-*
+- config_name: CC-MAIN-2014-49
+  data_files:
+  - split: train
+    path: CC-MAIN-2014-49/train-*
 ---
 
 We are uploading the dataset files ~
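
With the card updated, the new snapshot should be loadable like any other config listed in it. A hedged sketch using the standard `datasets` API; `user/dataset` is a placeholder for this repository's actual hub id, and the field accesses follow the features block added above:

```python
from datasets import load_dataset

# "user/dataset" is a placeholder; substitute this repository's hub id.
ds = load_dataset("user/dataset", "CC-MAIN-2014-49", split="train", streaming=True)

# Fields follow the schema declared above: per-document scores in
# general_metadata, interleaved texts/images, and per-image stats in metadata.
sample = next(iter(ds))
print(sample["general_metadata"]["url"])
print(len(sample["texts"]), "text slots,", len(sample["images"]), "image slots")

# Alternatively, the downloaded shards can be read directly with Dask:
# import dask.dataframe as dd
# df = dd.read_parquet("CC-MAIN-2014-49/train-*.parquet")
```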