system HF staff committed on
Commit 4350e91
Parent: 69c26e5

Update files from the datasets library (from 1.4.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2):
  1. README.md +21 -21
  2. oscar.py +4 -2
README.md CHANGED
@@ -214,30 +214,30 @@ task_ids:
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

- ## [Dataset Description](#dataset-description)
+ ## Dataset Description

  - **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Dataset Summary](#dataset-summary)
+ ### Dataset Summary

  OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.

- ### [Supported Tasks](#supported-tasks)
+ ### Supported Tasks

  OSCAR is mainly intended to pretrain language models and word representations.

- ### [Languages](#languages)
+ ### Languages

  All the data is distributed by language; both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus, as well as the number of words (space-separated tokens), lines, and sizes for both the original and the deduplicated versions of OSCAR.

- ## [Dataset Structure](#dataset-structure)
+ ## Dataset Structure

  We show detailed information for all the configurations of the dataset.

- ### [Data Instances](#data-instances)
+ ### Data Instances


  <details>
@@ -5471,14 +5471,14 @@ This example was too long and was cropped:

  </details>

- ### [Data Fields](#data-fields)
+ ### Data Fields

  The data fields are the same among all configs.

  - `id`: an `int64` feature.
  - `text`: a `string` feature.

- ### [Data Splits Sample Size](#data-splits-sample-size)
+ ### Data Splits Sample Size


  <details>
@@ -5657,9 +5657,9 @@ The data fields are the same among all configs.

  </details>

- ## [Dataset Creation](#dataset-creation)
+ ## Dataset Creation

- ### [Curation Rationale](#curation-rationale)
+ ### Curation Rationale

  OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.

@@ -5667,7 +5667,7 @@ The order of operations is more or less the same as in the fastText pre-processi

  Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.

- ### [Source Data](#source-data)
+ ### Source Data

  [Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web-crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.

@@ -5675,35 +5675,35 @@ Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, w

  To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.

- ### [Annotations](#annotations)
+ ### Annotations

  The dataset does not contain any additional annotations.

- ### [Personal and Sensitive Information](#personal-and-sensitive-information)
+ ### Personal and Sensitive Information

  Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.

- ## [Considerations for Using the Data](#considerations-for-using-the-data)
+ ## Considerations for Using the Data

- ### [Social Impact of Dataset](#social-impact-of-dataset)
+ ### Social Impact of Dataset

  OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.

- ### [Discussion of Biases](#discussion-of-biases)
+ ### Discussion of Biases

  OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.

- ### [Other Known Limitations](#other-known-limitations)
+ ### Other Known Limitations

  The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).

- ## [Additional Information](#additional-information)
+ ## Additional Information

- ### [Dataset Curators](#dataset-curators)
+ ### Dataset Curators

  The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).

- ### [Licensing Information](#licensing-information)
+ ### Licensing Information

  These data are released under this licensing scheme:
  We do not own any of the text from which these data have been extracted.
@@ -5718,7 +5718,7 @@ The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît S

  We will comply with legitimate requests by removing the affected sources from the next release of the corpus.

- ### [Citation Information](#citation-information)
+ ### Citation Information

  ```
  @inproceedings{ortiz-suarez-etal-2020-monolingual,
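As an aside on the Curation Rationale text in the diff above: the filter-then-classify step it describes can be sketched in a few lines of Python, assuming fastText's pre-trained language-identification model (`lid.176.bin`). This is an illustration of the idea only, not the actual goclassy code.

```python
# Illustrative sketch only: line-level filtering followed by fastText
# language identification, assuming the pre-trained lid.176.bin model.
import fasttext

model = fasttext.load_model("lid.176.bin")

def classify_line(line: str):
    """Return a language code for a line, or None if the line is filtered out."""
    line = line.rstrip("\n")
    # Lines shorter than 100 UTF-8 characters are discarded, not classified.
    if len(line) < 100:
        return None
    labels, _probs = model.predict(line)
    return labels[0].replace("__label__", "")  # e.g. "en", "fr"
```

Goclassy itself rewrites and parallelises this pipeline asynchronously; the sketch only mirrors the per-line logic.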
 
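For readers of the updated card, a brief usage sketch with the `datasets` library (1.4.0, the version this commit tracks). The config name below assumes the `unshuffled_deduplicated_<lang>` naming pattern used by the loading script.

```python
# Sketch: load one OSCAR subcorpus and inspect its two fields.
# Config name assumed to follow the unshuffled_deduplicated_<lang> pattern.
from datasets import load_dataset

dataset = load_dataset("oscar", "unshuffled_deduplicated_af", split="train")
sample = dataset[0]
print(sample["id"], sample["text"][:80])  # `id`: int64, `text`: string
```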
oscar.py CHANGED
@@ -20,11 +20,13 @@ from __future__ import absolute_import, division, print_function

  import collections
  import gzip
- import logging

  import datasets


+ logger = datasets.logging.get_logger(__name__)
+
+
  _DESCRIPTION = """\
  The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus \
  obtained by language classification and filtering of the Common Crawl corpus \
@@ -353,7 +355,7 @@ class Oscar(datasets.GeneratorBasedBuilder):
          id_ = 0
          current_lines = []
          for filepath in filepaths:
-             logging.info("generating examples from = %s", filepath)
+             logger.info("generating examples from = %s", filepath)
              with gzip.open(filepath, "rt") as f:
                  for line in f:
                      if len(line.strip()) > 0:
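The hunk above ends mid-function. For context, a hedged sketch of how the accumulation loop plausibly continues: non-empty lines are gathered into `current_lines`, and a blank line closes a document. This is an assumption about the elided body, not the verbatim script.

```python
# Hedged sketch of blank-line document grouping, assumed from the loop head
# shown in the diff; not the verbatim oscar.py implementation.
import gzip

def iter_documents(filepath):
    current_lines = []
    with gzip.open(filepath, "rt") as f:
        for line in f:
            if len(line.strip()) > 0:
                current_lines.append(line)
            elif current_lines:  # blank line marks a document boundary
                yield "".join(current_lines).rstrip()
                current_lines = []
    if current_lines:  # flush a final document with no trailing blank line
        yield "".join(current_lines).rstrip()
```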
 
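The net effect of the oscar.py change is that the script logs through the `datasets` logger (via `datasets.logging.get_logger(__name__)`) rather than the root `logging` module, so its messages follow the library's verbosity settings. A small sketch of how a user would surface them:

```python
# Sketch: raise the datasets library's verbosity so the
# "generating examples from = ..." INFO messages become visible.
import datasets

datasets.logging.set_verbosity_info()
dataset = datasets.load_dataset("oscar", "unshuffled_deduplicated_af", split="train")
```

Using the library logger keeps dataset scripts consistent with the logging configuration of the rest of the `datasets` library.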