---
dataset_info:
- config_name: arb_Arab
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 4913929
    num_examples: 1000
  download_size: 2381622
  dataset_size: 4913929
- config_name: ary_Arab
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 3086740
    num_examples: 1000
  download_size: 1515329
  dataset_size: 3086740
- config_name: arz_Arab
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 3175887
    num_examples: 1000
  download_size: 1543207
  dataset_size: 3175887
- config_name: bar_Latn
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 2494628
    num_examples: 1000
  download_size: 1517640
  dataset_size: 2494628
- config_name: cmn_Hani
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 4075430
    num_examples: 1000
  download_size: 2925797
  dataset_size: 4075430
- config_name: dan
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 3968961
    num_examples: 1000
  download_size: 2315299
  dataset_size: 3968961
- config_name: dan_Latn
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 3978961
    num_examples: 1000
  download_size: 2315349
  dataset_size: 3978961
- config_name: default
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 73894945
    num_examples: 13000
  download_size: 38830605
  dataset_size: 73894945
- config_name: fas_Arab
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 5759890
    num_examples: 1000
  download_size: 2662440
  dataset_size: 5759890
- config_name: gmh_Latn
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 16120134
    num_examples: 1000
  download_size: 9109369
  dataset_size: 16120134
- config_name: hin_Deva
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 6238691
    num_examples: 1000
  download_size: 2358281
  dataset_size: 6238691
- config_name: lvs
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 4598981
    num_examples: 1000
  download_size: 2807485
  dataset_size: 4598981
- config_name: lvs_Latn
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 4608981
    num_examples: 1000
  download_size: 2807535
  dataset_size: 4608981
- config_name: rus_Cyrl
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 9674640
    num_examples: 1000
  download_size: 4687716
  dataset_size: 9674640
- config_name: tat_Cyrl
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: educational_value_labels
    sequence: string
  - name: annotator_ids
    sequence: string
  - name: problematic_content_label_present
    dtype: bool
  - name: problematic_content_label_agreement
    dtype: float64
  - name: language_names
    dtype: string
  - name: language_code
    dtype: string
  splits:
  - name: train
    num_bytes: 6697853
    num_examples: 1000
  download_size: 3270919
  dataset_size: 6697853
configs:
- config_name: arb_Arab
  data_files:
  - split: train
    path: arb_Arab/train-*
- config_name: ary_Arab
  data_files:
  - split: train
    path: ary_Arab/train-*
- config_name: arz_Arab
  data_files:
  - split: train
    path: arz_Arab/train-*
- config_name: bar_Latn
  data_files:
  - split: train
    path: bar_Latn/train-*
- config_name: cmn_Hani
  data_files:
  - split: train
    path: cmn_Hani/train-*
- config_name: dan
  data_files:
  - split: train
    path: dan/train-*
- config_name: dan_Latn
  data_files:
  - split: train
    path: dan_Latn/train-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: fas_Arab
  data_files:
  - split: train
    path: fas_Arab/train-*
- config_name: gmh_Latn
  data_files:
  - split: train
    path: gmh_Latn/train-*
- config_name: hin_Deva
  data_files:
  - split: train
    path: hin_Deva/train-*
- config_name: lvs
  data_files:
  - split: train
    path: lvs/train-*
- config_name: lvs_Latn
  data_files:
  - split: train
    path: lvs_Latn/train-*
- config_name: rus_Cyrl
  data_files:
  - split: train
    path: rus_Cyrl/train-*
- config_name: tat_Cyrl
  data_files:
  - split: train
    path: tat_Cyrl/train-*
tags:
- argilla
- data-is-better-together
task_categories:
- text-classification
language:
- lvs
- fas
- dan
- arz
- ary
- arb
- tat
- rus
- gmh
- bar
- hin
- cmn
pretty_name: FineWeb-C
---
# FineWeb-C: Educational content in many languages, labelled by the community

<center>
    <img src="https://huggingface.co/spaces/data-is-better-together/fineweb-communications-pack/resolve/main/fineweb-c-card-header.png" alt="FineWeb-C: Educational content in many languages, labelled by the community">
</center>

> *Multilingual data is better together!*

**Note**: This dataset and the dataset card are works in progress. You can help contribute to the dataset [here](https://huggingface.co/spaces/data-is-better-together/fineweb-c) and join the community discussions in [rocket chat](https://huggingface.co/spaces/HuggingFaceFW/discussion)!

## What is this?

This is a collaborative, community-driven project that expands upon the [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset. Our goal is to create high-quality educational content annotations across hundreds of languages. 

By enhancing web content with these annotations, we aim to improve the development of Large Language Models (LLMs) in all languages, making AI technology more accessible and effective globally.

The annotations in this dataset will help train AI systems to automatically identify high-quality educational content in more languages and in turn help build better Large Language Models for all languages.

### What the community is doing:

- For a given language, look at a page of web content from the [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset in Argilla.
- Rate how educational the content is.
- Flag problematic content, i.e. content that is malformed or in the wrong language.

Once a language reaches 1,000 annotations, it is included in this dataset! Alongside rating the educational quality of the content, different language communities are discussing other ways to improve the quality of data for their language in our [rocket chat](https://chat.huggingface.co/channel/fineweb-c) discussion channel.

### What's been done so far?

So far **318** members of the Hugging Face community have submitted **32,863** annotations.

The following languages have reached the 1,000 annotation threshold to be included in the dataset. We'll keep updating this dataset as more annotations are added!

| Language Code | Language Name | Completed Annotations | Annotators |
|--------------|---------------|---------------------|------------|
| arb_Arab | Standard Arabic | 1000 | 10 |
| ary_Arab | Moroccan Arabic | 1000 | 15 |
| arz_Arab | Egyptian Arabic | 1000 | 9 |
| bar_Latn | Bavarian | 1000 | 1 |
| cmn_Hani | Mandarin Chinese | 1000 | 3 |
| dan_Latn | Danish | 1000 | 18 |
| fas_Arab | Persian | 1000 | 3 |
| gmh_Latn | Middle High German | 1000 | 1 |
| hin_Deva | Hindi | 1000 | 3 |
| lvs_Latn | Standard Latvian | 1000 | 5 |
| rus_Cyrl | Russian | 1000 | 4 |
| tat_Cyrl | Tatar | 1000 | 7 |


_You can help contribute to the dataset [here](https://huggingface.co/spaces/data-is-better-together/fineweb-c)._

Below is an overview of the number of annotations submitted for each language (updated daily).

<iframe src="https://huggingface.co/datasets/data-is-better-together/fineweb-c-progress/embed/sql-console/dhn8hw-" frameborder="0" width="100%" height="560px"></iframe>

### Why are we doing this?

There are many languages in the world for which no high-quality LLMs exist. High-quality data is a central part of building high-quality LLMs. [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) is a crucial step in improving the availability of such data for many languages. We plan to go a step further.

#### Fineweb-Edu for every language?

[FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) is a dataset built on the original [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) dataset. It was constructed by developing an educational quality classifier using annotations generated by Llama3-70B-Instruct and using this classifier to retain only the most educational web pages.

FineWeb-Edu outperforms FineWeb on popular benchmarks. Crucially, this approach reduces the amount of data needed to train a high-quality LLM, lowering the barrier to building strong LLMs for many more languages.

We want to make it possible to build FineWeb-Edu-style datasets for all the world's languages. To do this, we need annotations to train an educational quality classifier for each language.

This in turn will allow us to build the next generation of Large Language Models for many languages.
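
To make the goal concrete, here is a minimal, illustrative sketch of how the community annotations could feed a simple educational quality classifier. This is a toy baseline, not the FineWeb-Edu methodology: the label strings in `LABEL_TO_SCORE` and the majority-vote aggregation are assumptions, so check the actual values in `educational_value_labels` before relying on them.

```python
# Toy baseline sketch (NOT the FineWeb-Edu methodology): map each page's
# majority community label to an ordinal score and fit a TF-IDF + logistic
# regression classifier. Label names below are assumptions — inspect
# `educational_value_labels` in the data for the real set.
from collections import Counter

from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

LABEL_TO_SCORE = {"None": 0, "Minimal": 1, "Basic": 2, "Good": 3, "Excellent": 4}

ds = load_dataset("data-is-better-together/fineweb-c-edu", "dan_Latn", split="train")

texts, scores = [], []
for row in ds:
    labels = [l for l in row["educational_value_labels"] if l in LABEL_TO_SCORE]
    if not labels:
        continue  # skip pages with only problematic/unrecognised labels
    majority_label = Counter(labels).most_common(1)[0][0]
    texts.append(row["text"])
    scores.append(LABEL_TO_SCORE[majority_label])

X_train, X_test, y_train, y_test = train_test_split(texts, scores, test_size=0.2, random_state=0)
clf = make_pipeline(TfidfVectorizer(max_features=50_000), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```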

#### Why not use LLMs to annotate the data?

For high-resource languages, using an LLM to generate educational quality annotations can be a good solution. However, for many languages, LLMs cannot generate high-quality annotations, or we don't have enough data to validate whether the annotations are correct.

## How can I help?

You can help by contributing to the dataset [here](https://huggingface.co/spaces/data-is-better-together/fineweb-c) and by joining the community discussions in [rocket chat](https://chat.huggingface.co/channel/fineweb-c)!

## Why would I bother to contribute to this dataset?

Your contributions directly shape the future of AI in your language. Here's why this matters:

1. Break the AI language barrier: Most commercial AI companies focus on profitable languages, leaving many communities behind. Your work helps bring AI capabilities to more languages.

2. Keep it open: Unlike proprietary datasets locked away by companies, FineWeb-C is an open dataset. This means anyone can use it to build AI systems that truly serve their community's needs. Through this open approach, we also learn which approaches work best for different languages.

3. Be part of something bigger: Just as Wikipedia showed how volunteers can build invaluable resources, the Hugging Face community has created numerous open models and datasets. You're joining a movement to democratize AI technology.

Every annotation counts. Whether you can contribute ten minutes or ten hours, your input helps build a more inclusive future for AI technology 🤗 

## Who contributed to this dataset so far? 

These are the top 10 contributors to this release of the dataset. Make sure to give them a follow on the Hub to show your appreciation!

| Hugging Face Username | Submissions |
|----------|------------|
| [stefan-it](https://huggingface.co/stefan-it) | 2,011 |
| [hasnachouikhi](https://huggingface.co/hasnachouikhi) | 1,865 |
| [catastropiyush](https://huggingface.co/catastropiyush) | 1,053 |
| [vikkormallansohn](https://huggingface.co/vikkormallansohn) | 1,000 |
| [rasgaard](https://huggingface.co/rasgaard) | 1,000 |
| [Maani](https://huggingface.co/Maani) | 985 |
| [paperplanedeemo](https://huggingface.co/paperplanedeemo) | 978 |
| [JakobBlaa](https://huggingface.co/JakobBlaa) | 978 |
| [anhha9](https://huggingface.co/anhha9) | 927 |
| [Aivis](https://huggingface.co/Aivis) | 894 |


Data work is the underappreciated foundation of AI and ML. This dataset is built by the community, for the community. Below is a leaderboard, updated daily, showing all the contributors to this annotation effort.

<iframe src="https://huggingface.co/datasets/data-is-better-together/fineweb-c-progress/embed/sql-console/DJ2n1Z0" frameborder="0" width="100%" height="560px"></iframe>


#### Language-specific Contributors

Below you can find a list of all the contributors to this release of the dataset for each language ❤️

<details>
<summary>Detailed Contributor Statistics for each language</summary>



### Bavarian (bar_Latn)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [stefan-it](https://huggingface.co/stefan-it) | 1000 |
</details>



### Danish (dan_Latn)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [rasgaard](https://huggingface.co/rasgaard) | 1000 |
| [JakobBlaa](https://huggingface.co/JakobBlaa) | 978 |
| [saattrupdan](https://huggingface.co/saattrupdan) | 200 |
| [FrLars21](https://huggingface.co/FrLars21) | 80 |
| [markhougaard](https://huggingface.co/markhougaard) | 72 |
| [KennethEnevoldsen](https://huggingface.co/KennethEnevoldsen) | 44 |
| [Apasalic](https://huggingface.co/Apasalic) | 33 |
| [tqvist](https://huggingface.co/tqvist) | 33 |
| [cnila](https://huggingface.co/cnila) | 31 |
| [Soeren-B](https://huggingface.co/Soeren-B) | 28 |
| [KristianL](https://huggingface.co/KristianL) | 22 |
| [mathiasn1](https://huggingface.co/mathiasn1) | 16 |
| [ITK-dev](https://huggingface.co/ITK-dev) | 12 |
| [jannikskytt](https://huggingface.co/jannikskytt) | 8 |
| [AndreasLH](https://huggingface.co/AndreasLH) | 7 |
| [perlausten](https://huggingface.co/perlausten) | 5 |
| [sorenmulli](https://huggingface.co/sorenmulli) | 3 |
| [organicoder](https://huggingface.co/organicoder) | 1 |
</details>



### Egyptian Arabic (arz_Arab)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [mmhamdy](https://huggingface.co/mmhamdy) | 734 |
| [aishahamdy](https://huggingface.co/aishahamdy) | 141 |
| [oumayma03](https://huggingface.co/oumayma03) | 54 |
| [omarelshehy](https://huggingface.co/omarelshehy) | 46 |
| [ghada00](https://huggingface.co/ghada00) | 14 |
| [heba1998](https://huggingface.co/heba1998) | 10 |
| [chemouda](https://huggingface.co/chemouda) | 3 |
| [aammari](https://huggingface.co/aammari) | 2 |
| [amreleraqi](https://huggingface.co/amreleraqi) | 1 |
</details>



### Hindi (hin_Deva)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [catastropiyush](https://huggingface.co/catastropiyush) | 926 |
| [pp](https://huggingface.co/pp) | 73 |
| [Urmish](https://huggingface.co/Urmish) | 1 |
</details>



### Mandarin Chinese (cmn_Hani)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [paperplanedeemo](https://huggingface.co/paperplanedeemo) | 978 |
| [guokan-shang](https://huggingface.co/guokan-shang) | 12 |
| [AdinaY](https://huggingface.co/AdinaY) | 10 |
</details>



### Middle High German (gmh_Latn)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [stefan-it](https://huggingface.co/stefan-it) | 1000 |
</details>



### Moroccan Arabic (ary_Arab)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [Ihssane123](https://huggingface.co/Ihssane123) | 499 |
| [imomayiz](https://huggingface.co/imomayiz) | 234 |
| [NouhailaChab05](https://huggingface.co/NouhailaChab05) | 120 |
| [nouamanetazi](https://huggingface.co/nouamanetazi) | 58 |
| [master12gx](https://huggingface.co/master12gx) | 37 |
| [oumayma03](https://huggingface.co/oumayma03) | 21 |
| [Overowser](https://huggingface.co/Overowser) | 14 |
| [SoufianeDahimi](https://huggingface.co/SoufianeDahimi) | 12 |
| [adnananouzla](https://huggingface.co/adnananouzla) | 11 |
| [alielfilali01](https://huggingface.co/alielfilali01) | 3 |
| [staghado](https://huggingface.co/staghado) | 3 |
| [olafdil](https://huggingface.co/olafdil) | 2 |
| [maghwa](https://huggingface.co/maghwa) | 2 |
| [0xTechVio](https://huggingface.co/0xTechVio) | 1 |
| [maggierphunt](https://huggingface.co/maggierphunt) | 1 |
</details>



### Persian (fas_Arab)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [Maani](https://huggingface.co/Maani) | 985 |
| [mehrdadazizi](https://huggingface.co/mehrdadazizi) | 14 |
| [kargaranamir](https://huggingface.co/kargaranamir) | 1 |
</details>



### Russian (rus_Cyrl)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [kitano-o](https://huggingface.co/kitano-o) | 593 |
| [kristaller486](https://huggingface.co/kristaller486) | 396 |
| [knyazer](https://huggingface.co/knyazer) | 9 |
| [alialek](https://huggingface.co/alialek) | 5 |
</details>



### Standard Arabic (arb_Arab)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [hasnachouikhi](https://huggingface.co/hasnachouikhi) | 1000 |
| [alielfilali01](https://huggingface.co/alielfilali01) | 4 |
</details>



### Standard Arabic (arb_Arab)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [hasnachouikhi](https://huggingface.co/hasnachouikhi) | 865 |
| [chemouda](https://huggingface.co/chemouda) | 102 |
| [oumayma03](https://huggingface.co/oumayma03) | 12 |
| [ahmedselhady](https://huggingface.co/ahmedselhady) | 9 |
| [staghado](https://huggingface.co/staghado) | 7 |
| [alielfilali01](https://huggingface.co/alielfilali01) | 4 |
| [YassineL](https://huggingface.co/YassineL) | 2 |
| [maggierphunt](https://huggingface.co/maggierphunt) | 1 |
</details>



### Standard Latvian (lvs_Latn)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [Aivis](https://huggingface.co/Aivis) | 894 |
| [slckl](https://huggingface.co/slckl) | 48 |
| [finnayeet](https://huggingface.co/finnayeet) | 33 |
| [zemais](https://huggingface.co/zemais) | 26 |
| [minem99](https://huggingface.co/minem99) | 2 |
</details>



### Tatar (tat_Cyrl)

<details>
<summary>User Statistics Table (Minimum 1 submission)</summary>

| Username | Submissions |
|----------|------------|
| [tagay1n](https://huggingface.co/tagay1n) | 515 |
| [gaydmi](https://huggingface.co/gaydmi) | 313 |
| [inov8](https://huggingface.co/inov8) | 126 |
| [iamdweebish](https://huggingface.co/iamdweebish) | 42 |
| [Giniyatullina](https://huggingface.co/Giniyatullina) | 6 |
| [Empirenull](https://huggingface.co/Empirenull) | 3 |
| [Khusaenov](https://huggingface.co/Khusaenov) | 1 |
</details>



</details>

## Using this dataset

The dataset has a `default` config that contains the data for all languages, as well as a separate config for each language.

To download the dataset using the Hugging Face `datasets` library, you can use the following code:

```python
from datasets import load_dataset

dataset = load_dataset("data-is-better-together/fineweb-c-edu")
```

To download a specific language, you can use the following code:

```python
dataset = load_dataset("data-is-better-together/fineweb-c-edu", "cmn_Hani")  # the language code is the config name
```
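
If you're unsure which language configs are currently available, you can list them without downloading any data. A small sketch; the config names match the language codes in the table above:

```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names("data-is-better-together/fineweb-c-edu")
print(configs)  # e.g. ['arb_Arab', 'ary_Arab', ..., 'default']
```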

You can also download the dataset using Pandas:

```python
import pandas as pd

# Login using e.g. `huggingface-cli login` to access this dataset
df = pd.read_parquet("hf://datasets/data-is-better-together/fineweb-c-edu/arb_Arab/train-00000-of-00001.parquet")
```

or Polars:

```python
import polars as pl

# Login using e.g. `huggingface-cli login` to access this dataset
df = pl.read_parquet("hf://datasets/data-is-better-together/fineweb-c-edu/arb_Arab/train-00000-of-00001.parquet")
```

## Data Fields

The dataset contains the following columns:

| Column Name                         | Type         | Description                                                                       |
| ----------------------------------- | ------------ | --------------------------------------------------------------------------------- |
| id                                  | string       | A unique identifier for each annotation record                                    |
| text                                | string       | The text of the web page                                                          |
| educational_value_labels            | list[string] | The educational value labels assigned to the web page by the community            |
| annotator_ids                       | list[string] | The IDs of the annotators who labelled the page                                   |
| problematic_content_label_present   | boolean      | A flag indicating that at least one 'problematic' label was assigned to the text  |
| problematic_content_label_agreement | float        | The level of agreement among annotators on the problematic content label          |
| language_names                      | string       | The name of the language of the page                                              |
| language_code                       | string       | The language code of the page                                                     |

The main things to note (we'll update this as we get more data):

- Some languages already have multiple annotations per page. So far, we haven't done any processing on these rows, so you are free to calculate annotator agreement in whatever way you want.
- For languages with many active annotators, we may increase the overlap of annotations over time to further improve the quality of the dataset.
- Some languages contain many `problematic content` labels. These often occur when the language detection was not correct. The `problematic_content_label_present` boolean column indicates whether a page received at least one `problematic content` label; you can filter on this column to remove such rows. Alternatively, you can use the `problematic_content_label_agreement` column to filter on annotator agreement, i.e. only remove rows where the annotators agree on the `problematic content` label (see the sketch below). For many of the most active language efforts, we're working with the community to improve the quality of the data, so we hope the number of `problematic content` labels will decrease over time.
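
For example, here is a minimal sketch of the filtering described above, plus one possible way to resolve multiple annotations per page. The agreement threshold of `1.0` and the majority-vote rule are just example choices:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("data-is-better-together/fineweb-c-edu", "dan_Latn", split="train")

# Drop rows where all annotators agreed the content is problematic.
clean = ds.filter(
    lambda row: not (
        row["problematic_content_label_present"]
        and row["problematic_content_label_agreement"] == 1.0
    )
)

# One possible way to aggregate multiple annotations per page: majority vote.
def majority_label(row):
    return Counter(row["educational_value_labels"]).most_common(1)[0][0]

print(majority_label(clean[0]))
```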


## Licensing Information

The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 license. The use of this dataset is also subject to CommonCrawl's Terms of Use.

## Citation


_Citation information needs to be added_


## Last Updated

2024-12-20