system HF staff committed on
Commit dab66aa
Parent(s): 02fc876

Update files from the datasets library (from 1.9.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.9.0

Files changed (2):
  1. README.md +15 -0
  2. oscar.py +1 -1
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+pretty_name: OSCAR
 annotations_creators:
 - no-annotation
 language_creators:
@@ -6329,16 +6330,30 @@ Filtering and cleaning processes at line level are done before feeding each line
 
 ### Source Data
 
+#### Initial Data Collection and Normalization
+
 [Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
 
 Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
 
 To construct OSCAR, the WET files of Common Crawl were used. These contain the plain text extracted from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
 
+#### Who are the source language producers?
+
+The data comes from multiple web pages in a large variety of languages.
+
 ### Annotations
 
 The dataset does not contain any additional annotations.
 
+#### Annotation process
+
+N/A
+
+#### Who are the annotators?
+
+N/A
+
 ### Personal and Sensitive Information
 
 Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
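
Note on the "Initial Data Collection and Normalization" section added above: the WET files it describes are gzip-compressed WARC containers in which each record carries the plain text of one crawled page, preceded by a metadata header. The following is a minimal sketch of iterating over such records, not the actual OSCAR pipeline; the record parsing is simplified and the filename is a placeholder (the November 2018 snapshot corresponds to the CC-MAIN-2018-47 crawl).

```python
import gzip

def iter_wet_records(filepath):
    """Yield (metadata header, plain-text body) pairs from a local WET file."""
    with gzip.open(filepath, "rt", encoding="utf-8", errors="replace") as f:
        header, body = {}, []
        in_header = False
        for raw in f:
            line = raw.rstrip("\n")
            if line == "WARC/1.0":  # a version line marks the start of a record
                if body:
                    yield header, "\n".join(body).strip()
                header, body, in_header = {}, [], True
            elif in_header:
                if not line:  # a blank line terminates the metadata header
                    in_header = False
                elif ": " in line:
                    key, value = line.split(": ", 1)
                    header[key] = value
            else:
                body.append(line)
        if body:  # flush the final record
            yield header, "\n".join(body).strip()

# Placeholder filename for a locally downloaded WET file.
for header, text in iter_wet_records("CC-MAIN-2018-47.warc.wet.gz"):
    print(header.get("WARC-Target-URI"), len(text))
```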
oscar.py CHANGED
@@ -355,7 +355,7 @@ class Oscar(datasets.GeneratorBasedBuilder):
         current_lines = []
         for filepath in filepaths:
             logger.info("generating examples from = %s", filepath)
-            with gzip.open(filepath, "rt", encoding="utf-8") as f:
+            with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
                 for line in f:
                     if len(line.strip()) > 0:
                         current_lines.append(line)
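
Note on the oscar.py change: in streaming mode the datasets library patches the builtin open so that remote files are read lazily over HTTP, so passing the result of open(filepath, "rb") to gzip.open routes file access through that patched builtin, whereas gzip.open(filepath, ...) would open the path directly and bypass it. A hedged usage sketch follows; the config name and the id/text fields are taken from the OSCAR dataset card.

```python
from itertools import islice

from datasets import load_dataset

# With datasets >= 1.9.0 the OSCAR script can be streamed instead of fully
# downloaded; this is what the open()-wrapping change above enables.
dataset = load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)

# Peek at the first three examples without materializing the corpus.
for example in islice(dataset, 3):
    print(example["id"], example["text"][:80])
```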