cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Nils Brügger, Ian Milligan – Aarhus University, Denmark; University of Waterloo, Canada",The SAGE Handbook of Web History,https://us.sagepub.com/en-us/nam/the-sage-handbook-of-web-history/book252251,papers,20190101Z00:00:00,,,"Aarhus University, Denmark; University of Waterloo, Canada","web-science, web history",,,,,
"Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave – Facebook AI",CCNet: Extracting high quality monolingual datasets from web crawl data,https://arxiv.org/abs/1911.00359,papers,20190101Z00:00:00,,"Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.",Facebook AI,"nlp/corpus-construction, nlp/web-as-corpus, nlp/low-resource-language","[about https://github.com/facebookresearch/cc_net] In this paper, we present a data collection pipeline that allows to gather massive monolingual corpora of high quality in a variety of languages, including many low-resource ones. The principles of our pipeline are general and we show the results of its application to data collected by the Common Crawl project.¹ Common Crawl is a massive non-curated dataset of webpages in many languages, mixed together in temporal snapshots of the web.",,CCNet,,
"A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, Ilya Sutskever – OpenAI, San Francisco, California, United States",Language models are unsupervised multitask learners,https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe,papers,20190101Z00:00:00,,,"OpenAI, San Francisco, California, United States",cc-cited-not-used,"A promising source of diverse and nearly unlimited text is web scrapes such as Common Crawl. While these archives are many orders of magnitude larger than current language modeling datasets, they have significant data quality issues. Trinh & Le (2018) used Common Crawl in their work on commonsense reasoning but noted a large amount of documents “whose content are mostly unintelligible”. We observed similar data issues in our initial experiments with Common Crawl. Trinh & Le (2018)’s best results were achieved using a small subsample of Common Crawl which included only documents most similar to their target dataset, the Winograd Schema Challenge. While this is a pragmatic approach to improve performance on a specific task, we want to avoid making assumptions about the tasks to be performed ahead of time. Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny. The resulting dataset, WebText, contains the text subset of these 45 million links.",,,,
"Pedro Javier Ortiz Suárez, Benoît Sagot, Laurent Romary – Inria, Paris, France; Sorbonne University, Paris, France",Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures,https://hal.inria.fr/hal-02148693,papers,20190101Z00:00:00,,,"Inria, Paris, France; Sorbonne University, Paris, France",nlp/corpus-construction,"We use the November 2018 snapshot which surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along with its metadata header. From now on, when we mention the “Common Crawl” corpus, we refer to this particular November 2018 snapshot.",CC-MAIN-2018-47 (WET),OSCAR,,
"Dominik Mottl – Hochschule Darmstadt, Germany",Multi-Label Branchenklassifikation von Web-Texten,https://fbmn.h-da.de/uploads/Themen/WS18_thesis_mottl.pdf,papers,20190101Z00:00:00,,,"Hochschule Darmstadt, Germany","nlp/NER, entity-linking",NER of company names and linking to DBpedia performed on English texts in 712 WET files of the November 2018 crawl (CC-MAIN-2018-47) using cc-pyspark.,,,,
"Sebastian Nagel – Common Crawl, USA",Accessing WARC files via SQL,https://digital.library.unt.edu/ark:/67531/metadc1608961/,papers,20190101Z00:00:00,,,"Common Crawl, USA","web-archiving, SQL, Parquet",,cc-index-table,,,
"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov – Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA; Facebook AI",RoBERTa: A Robustly Optimized BERT Pretraining Approach,https://arxiv.org/abs/1907.11692,papers,20190101Z00:00:00,,,"Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA; Facebook AI","nlp/corpus-construction, nlp/language-model","We find that BERT was significantly undertrained and propose an improved recipe for training BERT models, which we call RoBERTa, that can match or exceed the performance of all of the post-BERT methods. Our modifications are simple, they include: (1) training the model longer, with bigger batches, over more data; (2) removing the next sentence prediction objective; (3) training on longer sequences; and (4) dynamically changing the masking pattern applied to the training data. We also collect a large new dataset (CC-NEWS) of comparable size to other privately used datasets, to better control for training set size effects. [...] CC-NEWS, which we collected from the English portion of the CommonCrawl News dataset (Nagel, 2016). The data contains 63 million English news articles crawled between September 2016 and February 2019. (76GB after filtering).⁴ [⁴ We use news-please (Hamborg et al., 2017) to collect and extract CC-NEWS. CC-NEWS is similar to the REALNEWS dataset described in Zellers et al. (2019).]",CC-NEWS,CC-NEWS-RoBERTa,,
"Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi – University of Washington, USA; Allen Institute for Artificial Intelligence, USA",Defending against neural fake news,http://papers.nips.cc/paper/9106-defending-against-neural-fake-news.pdf,papers,20190101Z00:00:00,,,"University of Washington, USA; Allen Institute for Artificial Intelligence, USA","nlp/language-model, nlp/fake-news-detection, nlp/text-classification, misinformation, disinformation","Dataset. We present RealNews, a large corpus of news articles from Common Crawl. Training Grover requires a large corpus of news articles with metadata, but none currently exists. Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News. We used the Newspaper Python library to extract the body and metadata from each article. News from Common Crawl dumps from December 2016 through March 2019 were used as training data; articles published in April 2019 from the April 2019 dump were used for evaluation. After deduplication, RealNews is 120 gigabytes without compression. [...] Obtaining the data required through Common Crawl cost $10k in AWS credits and can be massively parallelized over many CPUs. [...]",,Grover-RealNews,,
"Giulio Ermanno Pibiri, Matthias Petri, Alistair Moffat – University of Melbourne, Australia; University of Pisa, Italy; ISTI-CNR, Pisa, Italy",Fast Dictionary-Based Compression for Inverted Indexes,https://dl.acm.org/citation.cfm?id=3290962,papers,20190101Z00:00:00,,"Dictionary-based compression schemes provide fast decoding operation, typically at the expense of reduced compression effectiveness compared to statistical or probability-based approaches. In this work, we apply dictionary-based techniques to the compression of inverted lists, showing that the high degree of regularity that these integer sequences exhibit is a good match for certain types of dictionary methods, and that an important new trade-off balance between compression effectiveness and compression efficiency can be achieved. Our observations are supported by experiments using the document-level inverted index data for two large text collections, and a wide range of other index compression implementations as reference points. Those experiments demonstrate that the gap between efficiency and effectiveness can be substantially narrowed.","University of Melbourne, Australia; University of Pisa, Italy; ISTI-CNR, Pisa, Italy","information-retrieval/search-engine, information-retrieval/inverted-index","We use the standard Gov2 collection containing 426 GiB of text; and CCNEWS, an English subset of the freely available NEWS subset of the CommonCrawl¹ [¹http://commoncrawl.org/2016/10/news-dataset-available/], consisting of news articles in the period 09/01/16 to 30/03/18, following the methodology of Petri and Moffat [28].",CC-NEWS,,,
"Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin – Facebook AI",CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB,https://arxiv.org/abs/1911.04944,papers,20190101Z00:00:00,"Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","We show that margin-based bitext mining in a multilingual sentence space can be applied to monolingual corpora of billions of sentences. We are using ten snapshots of a curated common crawl corpus (Wenzek et al., 2019), totalling 32.7 billion unique sentences. Using one unified approach for 38 languages, we were able to mine 4.5 billion parallel sentences, out of which 661 million are aligned with English. 20 language pairs have more than 30 million parallel sentences, 112 more than 10 million, and most more than one million, including direct alignments between many European or Asian languages.",Facebook AI,"nlp/corpus-construction, nlp/parallel-corpus, nlp/machine-translation","The curated Common Crawl corpus¶ In this work, we propose to mine parallel sentences from the Web, by using the data released by the Common Crawl project.[⁵https://commoncrawl.org/] Each month, a snapshot of the Web containing terabytes of web pages in various languages is obtained by randomly exploring URLs. We start by applying some preprocessing steps to the raw text data, following the pipeline introduced by Wenzek et al. (2019) and leading to the CCNet dataset. The first step is to deduplicate the data at the paragraph level, as the original crawls contain up to 70% of duplicated data. This preprocessing removes low quality content, such as boilerplate, navigation menus or cookie warnings. The second step of the pipeline is to identify the language of each document, using fastText⁶ (Grave et al., 2018). This language identifier uses a linear classifier with character n-gram features, and can recognize up to 176 languages. Finally, the last step of the preprocessing is to filter low quality content by training a language model on Wikipedia, and only keeping documents with a low perplexity score. We refer the reader to Wenzek et al. (2019) for more details about this preprocessing pipeline. In Figure 1, we report the number of unique sentences obtained after preprocessing ten snapshots from Common Crawl. We currently process 38 languages. The English Web content is abundant and we used only one snapshot.",,CCMatrix,,
"Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc'Aurelio Ranzato, Arthur Szlam – Facebook AI Research; Harvard University, USA",Real or Fake? Learning to Discriminate Machine from Human Generated Text,https://arxiv.org/abs/1906.03351,papers,20190101Z00:00:00,,,"Facebook AI Research; Harvard University, USA",nlp/text-classification,"CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016) [Sebastian Nagel. Cc-news. http://web.archive.org/save/http://commoncrawl.org/2016/10/news-dataset-available/, 2016.], which totals around 16 Billion words.",CC-NEWS,"CCNews (Bakhtin, et al. 2019)",,
"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le – Carnegie Mellon University, Google AI Brain Team",XLNet: Generalized Autoregressive Pretraining for Language Understanding,https://arxiv.org/abs/1906.08237,papers,20190101Z00:00:00,,,"Carnegie Mellon University, Google AI Brain Team",nlp/transformer-language-model,"Following BERT [10], we use the BooksCorpus [40] and English Wikipedia as part of our pretraining data, which have 13GB plain text combined. In addition, we include Giga5 (16GB text) [26], ClueWeb 2012-B (extended from [5]), and Common Crawl [6] for pretraining. We use heuristics to aggressively filter out short or low-quality articles for ClueWeb 2012-B and Common Crawl, which results in 19GB and 110GB text respectively.",,,,
"Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov – Facebook AI",Unsupervised Cross-lingual Representation Learning at Scale,https://arxiv.org/abs/1911.02116,papers,20190101Z00:00:00,"Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.",Facebook AI,"nlp/corpus-construction, nlp/web-as-corpus, nlp/language-model","Following Wenzek et al. (2019)², we build a clean CommonCrawl Corpus in 100 languages. [...] In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages.",,CC-100,CCNet,