cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited "Frankie Robertson, Jarkko Lagus, Kaisla Kajava – University of Jyväskylä, Finland; University of Helsinki, Finland",A COVID-19 news coverage mood map of Europe,https://www.aclweb.org/anthology/2021.hackashop-1.15,papers,20210101Z00:00:00,,"We present a COVID-19 news dashboard which visualizes sentiment in pandemic news coverage in different languages across Europe. The dashboard shows analyses for positive/neutral/negative sentiment and moral sentiment for news articles across countries and languages. First we extract news articles from news-crawl. Then we use a pre-trained multilingual BERT model for sentiment analysis of news article headlines and a dictionary- and word-vector-based method for moral sentiment analysis of news articles. The resulting dashboard gives a unified overview of COVID-19 news events, their overall sentiment, and the region and language of publication, covering the period from the beginning of January 2020 to the end of January 2021.","University of Jyväskylä, Finland; University of Helsinki, Finland","nlp/corpus-construction, nlp/sentiment-analysis",,CC-NEWS,,, "Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Matt Gardner – Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, USA",Documenting large webtext corpora: a case study on the Colossal Clean Crawled Corpus,https://arxiv.org/abs/2104.08758,papers,20210101Z00:00:00,,"Large language models have led to remarkable progress on many NLP tasks, and researchers are turning to ever-larger text corpora to train them. Some of the largest corpora available are made by scraping significant portions of the internet, and are frequently introduced with only minimal documentation. In this work we provide some of the first documentation for the Colossal Clean Crawled Corpus (C4; Raffel et al., 2020), a dataset created by applying a set of filters to a single snapshot of Common Crawl. We begin by investigating where the data came from, and find a significant amount of text from unexpected sources like patents and US military websites. Then we explore the content of the text itself, and find machine-generated text (e.g., from machine translation systems) and evaluation examples from other benchmark NLP datasets. To understand the impact of the filters applied to create this dataset, we evaluate the text that was removed, and show that blocklist filtering disproportionately removes text from and about minority individuals. Finally, we conclude with some recommendations for how to create and document web-scale datasets from a scrape of the internet.","Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, USA","nlp/corpus-construction, nlp/language-model",,CC-MAIN-2019-18 (WET),"Tensorflow-C4, Huggingface-Allenai-C4-English",, "Isaac Caswell, Julia Kreutzer, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Javier Ortiz Suárez, Iroro Orife, Kelechi Ogueji, Rubungo Andre Niyongabo, Toan Q.
Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, Mofetoluwa Adeyemi – Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja",Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets,https://arxiv.org/abs/2103.12028,papers,20210101Z00:00:00,,"With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, web-mined text datasets covering hundreds of languages. However, to date there has been no systematic analysis of the quality of these publicly available datasets, or whether the datasets actually contain content in the languages they claim to represent. In this work, we manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4), and audit the correctness of language codes in a sixth (JW300). We find that lower-resource corpora have systematic issues: at least 15 corpora are completely erroneous, and a significant fraction contains less than 50\% sentences of acceptable quality. Similarly, we find 82 corpora that are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-speakers of the languages in question, and supplement the human judgements with automatic analyses. Inspired by our analysis, we recommend techniques to evaluate and improve multilingual corpora and discuss the risks that come with low-quality data releases.",Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja,"nlp/corpus-construction, nlp/web-as-corpus, nlp/parallel-corpus, nlp/low-resource-language","We selected the corpora for their multilinguality and the inclusion of understudied languages in NLP. 
With the exception of WikiMatrix and Paracrawl, all corpora are derived from CommonCrawl and distinguish themselves by the choice of filtering methods, LangID, and automatic alignment technology.",,"CCAligned-2020, Tensorflow-C4-Multilingual, OSCAR",, "P. Kalaharsha, B. M. Mehtre – Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, India; School of Computer Science and Information Sciences (SCIS), University of Hyderabad, Hyderabad, India",Detecting Phishing Sites – An Overview,https://arxiv.org/abs/2103.12739,papers,20210101Z00:00:00,,,"Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, India; School of Computer Science and Information Sciences (SCIS), University of Hyderabad, Hyderabad, India","computer-security/internet-security, computer-security/malicious-domain-detection",Alexa and Common Crawl contain names of legitimate sites which are likely to be used for phishing [62][63]. [63:http://index.commoncrawl.org],,,, "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel – Google Research",mT5: A massively multilingual pre-trained text-to-text transformer,https://arxiv.org/abs/2010.11934,papers,20210101Z00:00:00,,,Google Research,"nlp/corpus-construction, nlp/web-as-corpus, nlp/language-model","[...] we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages.",CC-MAIN-2019-18 (WET),Tensorflow-C4-Multilingual (mC4),, "Bilal Tahir, Muhammad Amir Mehmood – University of Engineering and Technology, Lahore, Pakistan",Corpulyzer: A Novel Framework for Building Low Resource Language Corpora,https://ieeexplore.ieee.org/document/9316706,papers,20210101Z00:00:00,,,"University of Engineering and Technology, Lahore, Pakistan","nlp/corpus-construction, nlp/web-as-corpus, nlp/low-resource-language","Leveraging the dataset from the Common Crawl Corpus (CCC), first, we prepare a list of seed URLs by filtering Urdu-language webpages. Next, we use Corpulyzer to crawl the World-Wide-Web (WWW) over a period of four years (2016-2020). We build the Urdu web corpus “UrduWeb20”, which consists of 8.0 million Urdu webpages crawled from 6,590 websites. [...] building a corpus of a low-resource language from CCC is a challenging task due to: i) sampling techniques, ii) filtering of webpages of target languages, and iii) full parsing of CCC. [...] we build upon our previous approach [40], where we developed a dataset consisting of 1.28 million Urdu webpages from the CCC 2016 dataset. [...] In general, CCC releases meta-data as well as the crawled content, where the former is lightweight and easier to analyze and the latter requires huge bandwidth to download and store. As an alternate strategy, we build three datasets using CC released data: i) CC-meta, ii) CC-Urdu-meta, and iii) CC-Urdu-crawl. First, we build the CC-meta dataset to explore the impact of URL selection and crawling strategies of Common Crawl in general. This dataset consists of meta-information on 29.1 billion URLs in 11 Common Crawl releases from September 2018 – June 2019. The meta-information of each release is available in the form of compressed files (>200 GB in size) with information on webpage URL, MIME type, charset, etc. [94]. Next, we build the CC-Urdu-meta dataset by filtering out Urdu webpages.
We note that, from the August 2018 release onward [95], CC also provides the ISO-639-3 language codes of the top three languages present in each webpage, obtained by parsing the webpage HTML with CLD2.",,,, "Alexandra Sasha Luccioni, Joseph D. Viviano – Université de Montréal, Canada; Mila Québec AI Institute, Canada",What's in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus,https://arxiv.org/abs/2105.02732,papers,20210101Z00:00:00,,,"Université de Montréal, Canada; Mila Québec AI Institute, Canada","ai/ethics-of-machine-learning, nlp/corpus-construction, nlp/text-corpora","Given its size, both downloading and analyzing the Common Crawl are time-consuming and costly endeavors. The most recent version of the Common Crawl [https://commoncrawl.org/2020/12/nov-dec-2020-crawl-archive-now-available/], dating from November/December 2020, has 2.6 billion web pages in raw text format, saved in ‘shards’ each containing tens of thousands of pages. Given our hardware constraints, we chose to focus on a subset of the corpus, randomly sampling 1% of the files it contains, amounting to roughly 81 GB of textual content or 5,835,339 webpages in total, which we analyzed in terms of hate speech, adult content, and efficacy of perplexity-based filtering. All code used in these analyses is publicly available¹ [¹https://github.com/josephdviviano/whatsinthebox]. [...] We found that the three approaches compared suggest similar proportions of websites containing hate speech: 5.24% of websites from our sample were flagged by DELIMIT, 4.02% by HateSonar, and 6.38% by the n-gram approach². [²We are conscious of the high false positive rate of n-gram approaches and therefore only consider sites to be flagged if they contain 3 or more n-grams from the list.] Qualitative analysis of a sample of sites flagged by each approach showed that while n-grams picked up on racial slurs, HateSonar picked up on debates about racial supremacy and conspiracy theories. Many of the sites that DELIMIT flagged were adult content with mentions of violent acts towards specific ethnic groups, illustrating the fine line between sexual violence and hate speech. [...] While it can be argued that the Common Crawl corpus is an accurate portrayal of the discourse of modern society – which includes sexual content, hate speech, racial biases, and gender biases – we believe that it is up for debate whether this discourse is the one that we, as a community, want to use to train the models that translate our texts, influence our search results and answer our questions.
Notably, the Common Crawl overrepresents those populations that are avid users of the internet: younger, English-speaking individuals from developed countries, [...]",,,, "Maik Fröbe, Janek Bevendorff, Lukas Gienapp, Michael Völske, Benno Stein, Martin Potthast, Matthias Hagen – Martin-Luther-Universität Halle-Wittenberg, Germany; Bauhaus-Universität Weimar, Germany; Leipzig University, Germany",CopyCat: Near-Duplicates within and between the ClueWeb and the Common Crawl,https://dl.acm.org/doi/10.1145/3404835.3463246,papers,20210101Z00:00:00,,,"Martin-Luther-Universität Halle-Wittenberg, Germany; Bauhaus-Universität Weimar, Germany; Leipzig University, Germany",ir/duplicate-detection,,"CC-MAIN-2015-11, CC-MAIN-2017-04",,, "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy – EleutherAI",The Pile: An 800GB Dataset of Diverse Text for Language Modeling,https://arxiv.org/abs/2101.00027,papers,20210101Z00:00:00,,"Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models. With this in mind, we present the Pile: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets—both existing and newly constructed—many of which derive from academic or professional sources. Our evaluation of the untuned performance of GPT-2 and GPT-3 on the Pile shows that these models struggle on many of its components, such as academic writing. Conversely, models trained on the Pile improve significantly over both Raw CC and CC-100 on all components of the Pile, while improving performance on downstream evaluations. Through an in-depth exploratory analysis, we document potentially concerning aspects of the data for prospective users. We make publicly available the code used in its construction.¹ [¹https://pile.eleuther.ai/]",EleutherAI,"nlp/corpus-construction, nlp/text-corpora, nlp/language-model, nlp/text-corpora/legal-aspects","The growing need for data in language modeling has caused most existing large-scale language models to turn to the Common Crawl for most or all of their data (Brown et al., 2020; Raffel et al., 2019). While training on the Common Crawl has been effective, recent work has shown that dataset diversity leads to better downstream generalization capability (Rosset, 2019). [...] we also introduce a new filtered subset of Common Crawl, Pile-CC, with improved extraction quality. [...] 2.1 Pile-CC Common Crawl is a collection of website crawls from 2008 onwards, including raw web pages, metadata and text extractions. Due to the raw nature of the dataset, Common Crawl has the advantage of including text from diverse domains, but at the cost of varying quality data. Due to this, use of Common Crawl typically necessitates well-designed extraction and filtering. Our Common Crawl-based dataset, Pile-CC, uses jusText (Endrédy and Novák, 2013) on Web Archive files (raw HTTP responses including page HTML) for extraction, which yields higher quality output than directly using the WET files (extracted plain-text). [...] Surprisingly, raw Common Crawl performs better on the Pile BPB than CC-100, despite losing by a significant margin on LAMBADA and WikiText. 
We hypothesize that this is due to the perplexity-based filtering used in CC-100, where a language model is trained on Wikipedia and all data with a perplexity too high or too low is discarded. This effectively discards any data too similar to or too different from Wikipedia, which severely limits the diversity of the collected data. This result suggests that future work using Common Crawl should take caution with filtering to preserve its diversity.","69 monthly crawls (WARC): CC-MAIN-2013-20 - CC-MAIN-2020-24, cf. https://github.com/leogao2/commoncrawl_downloader/blob/3a7a4a7c33aaee2a45f320f7bc57d0dcd3f3a220/indexes_20200607105929",The-Pile-English,, "Leon Derczynski, Manuel R. Ciosici, Rebekah Baglini, Morten H. Christiansen, Jacob Aarup Dalsgaard, Riccardo Fusaroli, Peter Juel Henrichsen, Rasmus Hvingelby, Andreas Kirkedal, Alex Speed Kjeldsen, Claus Ladefoged, Finn Årup Nielsen, Jens Madsen, Malte Lau Petersen, Jonathan Hvithamar Rystrøm, Daniel Varab – ITU Copenhagen, Denmark; Aarhus University, Denmark; Danish Language Council, Denmark; TV2 Regionerne, Denmark; Karnov Group, Denmark; USC Information Sciences Institute, USA; Alexandra Institute, Denmark; University of Copenhagen, Denmark; Technical University of Denmark; Novo Nordisk, Denmark",The Danish Gigaword Corpus,https://gigaword.dk/,papers,20210101Z00:00:00,,,"ITU Copenhagen, Denmark; Aarhus University, Denmark; Danish Language Council, Denmark; TV2 Regionerne, Denmark; Karnov Group, Denmark; USC Information Sciences Institute, USA; Alexandra Institute, Denmark; University of Copenhagen, Denmark; Technical University of Denmark; Novo Nordisk, Denmark","nlp/corpus-construction, nlp/text-corpora","[...] the Danish section of Common Crawl is plagued by significant amounts of non-Danish content, in part due to the pervasive confusion between Danish and Norwegian Bokmål by highly multilingual language ID classifiers (Haas and Derczynski, 2021). Datasets derived exclusively from Common Crawl also have a bias toward webspeak and content from recent years, leaving models built over them sub-optimally prepared to process older Danish. Common Crawl’s undirected collection of content often overrepresents some dialects at the expense of other dialects.",,,, "Patrick Dinklage, Jonas Ellert, Johannes Fischer, Florian Kurpicz, Marvin Löbel – TU Dortmund University, Germany",Practical Wavelet Tree Construction,https://doi.org/10.1145/3457197,papers,20210101Z00:00:00,"text indexing, shared memory, external memory, distributed memory, data structures","We present new sequential and parallel algorithms for wavelet tree construction based on a new bottom-up technique. This technique makes use of the structure of the wavelet trees—refining the characters represented in a node of the tree with increasing depth—in an opposite way, by first computing the leaves (most refined), and then propagating this information upwards to the root of the tree. We first describe new sequential algorithms, both in RAM and external memory. Based on these results, we adapt these algorithms to parallel computers, where we address both shared memory and distributed memory settings. In practice, all our algorithms outperform previous ones in both time and memory efficiency, because we can compute all auxiliary information solely based on the information we obtained from computing the leaves.
Most of our algorithms are also adapted to the wavelet matrix, a variant that is particularly suited for large alphabets.","TU Dortmund University, Germany","data-structures, text-indexing","Common Crawl. The Common Crawl corpus contains websites that are crawled by the Common Crawl Project. We use the WET files, which contain only the textual data of the crawled websites, i.e., no HTML tags. We also removed the meta information added by the Common Crawl corpus. To be more precise, we used the following WET files: crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/wet/CC-MAIN-20190215183319-20190215205319-#ID.warc.wet, where #ID is in the range from 00000 to 00600. As we only care about the text, we removed the WARC meta information, i.e., each line consisting of WARC/1.0 and the following eight lines. CommonCrawl is the concatenation of all files sorted in ascending order by their ID.",CC-MAIN-2019-09 (600 WET files),,, "Jay A. Olson, Johnny Nahas, Denis Chmoulevitch, Simon J. Cropper, Margaret E. Webb – Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, McGill University, Montreal, QC, Canada; Melbourne School of Psychological Sciences, University of Melbourne, Australia",Naming unrelated words predicts creativity,https://www.pnas.org/content/118/25/e2022340118,papers,20210101Z00:00:00,,"Many traditional measures of creativity require time-intensive and subjective scoring procedures. Their scores are relative to the specific sample, which makes multicultural or international assessments difficult. Our results show that a shorter and simpler task with automatic and objective scoring may be at least as reliable at measuring verbal creativity. This finding enables assessments across larger and more diverse samples with less bias. Several theories posit that creative people are able to generate more divergent ideas. If this is correct, simply naming unrelated words and then measuring the semantic distance between them could serve as an objective measure of divergent thinking. To test this hypothesis, we asked 8,914 participants to name 10 words that are as different from each other as possible. A computational algorithm then estimated the average semantic distance between the words; related words (e.g., cat and dog) have shorter distances than unrelated ones (e.g., cat and thimble). We predicted that people producing greater semantic distances would also score higher on traditional creativity measures. In Study 1, we found moderate to strong correlations between semantic distance and two widely used creativity measures (the Alternative Uses Task and the Bridge-the-Associative-Gap Task). In Study 2, with participants from 98 countries, semantic distances varied only slightly by basic demographic variables. There was also a positive correlation between semantic distance and performance on a range of problems known to predict creativity. Overall, semantic distance correlated at least as strongly with established creativity measures as those measures did with each other.
Naming unrelated words in what we call the Divergent Association Task can thus serve as a brief, reliable, and objective measure of divergent thinking. The data and algorithm code have been deposited in the Open Science Framework (https://osf.io/vjazn/).","Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, McGill University, Montreal, QC, Canada; Melbourne School of Psychological Sciences, University of Melbourne, Australia","psychology/creativity, psychology/computational-scoring, nlp/word-embeddings",We chose the GloVe algorithm and the Common Crawl corpus [...],,,GloVe-word-embeddings, "Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer – Facebook AI; University of Washington, USA",HTLM: Hyper-Text Pre-Training and Prompting of Language Models,https://arxiv.org/abs/2107.06955,papers,20210101Z00:00:00,,"We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling <title> tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hypertext formatting for any available training data. We will release all code and models to support future HTLM research.","Facebook AI; University of Washington, USA","nlp/corpus-construction, nlp/text-corpora, nlp/transformer-language-model",Our Hyper-Text Language Model (HTLM) is trained on 23TB of simplified HTML which we automatically extract from common crawl dumps [...],,,, "Julien Abadji, Pedro Javier Ortiz Suárez, Laurent Romary, Benoît Sagot – Inria, Paris, France; Sorbonne Université, Paris, France",Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus,https://ids-pub.bsz-bw.de/frontdoor/deliver/index/docId/10468/file/Abadji_Suarez_Romary_Ungoliant_2021.pdf,papers,20210101Z00:00:00,,"Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus.
We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.","Inria, Paris, France; Sorbonne Université, Paris, France","nlp/corpus-construction, nlp/text-corpora",,,,, "Guy Grossman, Stephanie Zonszein – University of Pennsylvania, USA","Voted In, Standing Out: Public Response to Immigrants' Political Accession",https://osf.io/xd4wk/,papers,20210101Z00:00:00,,"In a context of nativism and poor representation of immigrant-origin ethnic minorities, what is the reaction of the host society when immigrants succeed at integration in political institutions? Building on threat theory—which links minorities’ political power to hostility against minoritized groups—we argue that when they win political office, immigrants pose a threat to natives’ dominant position. This in turn triggers a hostile reaction from a violent-prone fringe, the mass public and the elites. We test these dynamics across the last four UK general elections, using hate crime police records, public opinion data, and text data from over 500,000 news articles from 350 national and local newspapers. We identify the public’s hostile reactions with a regression discontinuity design that leverages close election results between minority-immigrant and dominant group candidates. Our findings suggest a public backlash against ethnic minority immigrants’ integration into majority settings.","University of Pennsylvania, USA","political science, sociology, political integration of immigrants, ethnic minorities","News articles were extracted from Common Crawl, ethnic background of candidates is constructed by the authors, and constituency characteristics from 2001 and 2011 UK Decennial Census. [...] Then, to obtain the articles published by each of these newspapers, we looked up the URLs in Common Crawl (an open repository of web crawl data containing a snapshot of every web page at the moment of the crawl). Particularly in the Index for 2020-16 crawl, the most recent crawl at that moment. We retrieved the WARC (Web ARChive format) records for each crawled page from the newspaper, and extracted the pages’ HTML. From the HTML, we extracted the text, title, and byline using the Python package readabiliPy; the publication date using the Python library htmldate; the location by tokenizing the article with CoreNLP, and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Figure D.1 presents the geographical coverage of all extracted articles across constituencies.¶ [...] 4.3 Media tone toward migrant groups¶ Data We use data from over 500,000 articles from 350 national, regional and local UK newspapers, covering the general elections from 2010–2019.⁸ This data is from Common Crawl, which is an open repository of web crawl data.
We assume that an article refers to a candidate’s ethnic group when three conditions are met: 1) the publication date is on election day and up to 10 months after each general election⁹, 2) the article contains mentions of terms referring to the candidate’s country or nationality of origin, which are extracted with the named entity annotator of CoreNLP and 3) such mentions co-occur in the article with a mention referring to the candidate’s constituency. The constituency is extracted by tokenizing the article with CoreNLP and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Overall, this data includes almost 150,000 mentions from 156 newspapers that meet these three conditions about the candidates’ group. [...] D Newspaper data, computation of media tone measures and validation of key elements Newspaper data We construct the dataset of newspaper articles using the following steps. To determine a comprehensive list of UK newspapers, we first identified a list of seed categories on Wikipedia (WP) (e.g. ’Category:Newspapers_published_in_England’), we took the recursive items of those categories (e.g. ’Category:Newspapers_published_in_England’ > ’Category:Newspapers_published_in_London’), we used WP article properties to filter out articles about non-newspapers (e.g. people, books), and we extracted the newspaper URLs from the WP Infobox using the Python package wptools. With this process we identified a list of UK newspapers URLs containing 337 newspapers in total. Then, to obtain the articles published by each of these newspapers, we looked up the URLs in Common Crawl (an open repository of web crawl data containing a snapshot of every web page at the moment of the crawl). Particularly in the Index for 2020-16 crawl, the most recent crawl at that moment. We retrieved the WARC (Web ARChive format) records for each crawled page from the newspaper, and extracted the pages’ HTML. From the HTML, we extracted the text, title, and byline using the Python package readabiliPy; the publication date using the Python library htmldate; the location by tokenizing the article with CoreNLP, and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Figure D.1 presents the geographical coverage of all extracted articles across constituencies.",CC-MAIN-2020-16,,, "Helen Ngo, João G. M. Araújo, Jeffrey Hui, Nicholas Frosst – Cohere, Toronto, Canada",No News is Good News: A Critique of the One Billion Word Benchmark,https://arxiv.org/abs/2110.12609,papers,20210101Z00:00:00,,"The One Billion Word Benchmark is a dataset derived from the WMT 2011 News Crawl, commonly used to measure language modeling ability in natural language processing. We train models solely on Common Crawl web scrapes partitioned by year, and demonstrate that they perform worse on this task over time due to distributional shift. Analysis of this corpus reveals that it contains several examples of harmful text, as well as outdated references to current events. 
We suggest that the temporal nature of news and its distribution shift over time make it poorly suited for measuring language modeling ability, and discuss potential impact and considerations for researchers building language models and evaluation datasets.","Cohere, Toronto, Canada","nlp/language-model, nlp/language-model/perplexity","Common Crawl is a repository of web scrapes of the internet updated annually and is often used as a key data source for language models built on the open web [8, 2, 1]. We train benchmark models on three distinct datasets created by selecting data sampled from different years of Common Crawl: 2013 (the year in which lm1b was released), 2016, and 2020. [...] Models which are trained on datasets temporally further removed from the lm1b corpus source (i.e. the WMT 2011 News Crawl dataset) exhibit higher perplexity than those trained on datasets which are temporally closer.",,,, Leo Gao – EleutherAI,An Empirical Exploration in Quality Filtering of Text Data,https://arxiv.org/abs/2109.00698,papers,20210101Z00:00:00,,"While conventional wisdom suggests that more aggressively filtering data from low-quality sources like Common Crawl always monotonically improves the quality of training data, we find that aggressive filtering can in fact lead to a decrease in model quality on a wide array of downstream tasks for a GPT-like language model. We speculate that this is because optimizing sufficiently strongly for a proxy metric harms performance on the true objective, suggesting a need for more robust filtering objectives when attempting to filter more aggressively. We hope this work leads to detailed analysis of the effects of dataset filtering design choices on downstream model performance in future work.",EleutherAI,"nlp/language-model, nlp/corpus-construction","The recent proliferation of ever larger language models has led to increasing demands on training data (Radford et al., 2018, 2019; Gokaslan and Cohen, 2019; Rosset, 2019; Shoeybi et al., 2019; Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020; Zeng et al., 2021). This data is increasingly derived from internet corpora like Common Crawl (Radford et al., 2019; Ortiz Suárez et al., 2019; Wenzek et al., 2020; Conneau et al., 2020; Brown et al., 2020; Gao et al., 2020; Raffel et al., 2020). However, the quality of raw Common Crawl data is often insufficient to be directly used. To combat this, many existing works use some kind of proxy for quality, like a classifier between known high quality data and low quality data (Brown et al., 2020; Gao et al., 2020; Zeng et al., 2021), handcrafted heuristics (Yang et al., 2020; Raffel et al., 2020), or keeping only documents with perplexity scores that fall in some middle quantile of an existing language model (Wenzek et al., 2020). Brown et al. (2020) in particular filter extremely aggressively using their classifier, discarding about 98.7% of their data. Previous work has shown that models trained on heuristic-filtered datasets perform better on downstream tasks (Raffel et al., 2020). However, Gao et al. (2020) show that a perplexity-filtered CC-derived dataset actually performs worse than unfiltered CC on certain tasks. [...] We hypothesize that this decline in performance is because of misalignment between the classifier objective, intended to be a proxy for quality, and actual document quality.
For instance, a classifier to distinguish WebText2 from Common Crawl, as in GPT-3, would also exclude domains of text data not found as often in WebText2.",,,, "Abeba Birhane, Vinay Uday Prabhu, Emmanuel Kahembwe – University College Dublin, Ireland; Lero, Dublin, Ireland; University of Edinburgh, UK","Multimodal datasets: misogyny, pornography, and malignant stereotypes",https://arxiv.org/abs/2110.01963,papers,20210101Z00:00:00,,"We have now entered the era of trillion parameter machine learning models trained on billion-sized datasets scraped from the internet. The rise of these gargantuan datasets has given rise to formidable bodies of critical work that has called for caution while generating these large datasets. These address concerns surrounding the dubious curation practices used to generate these datasets, the sordid quality of alt-text data available on the world wide web, the problematic content of the CommonCrawl dataset often used as a source for training large language models, and the entrenched biases in large-scale visio-linguistic models (such as OpenAI's CLIP model) trained on opaque datasets (WebImageText). In the backdrop of these specific calls of caution, we examine the recently released LAION-400M dataset, which is a CLIP-filtered dataset of Image-Alt-text pairs parsed from the Common-Crawl dataset. We found that the dataset contains troublesome and explicit images and text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content. We outline numerous implications, concerns and downstream harms regarding the current state of large scale datasets while raising open questions for various stakeholders including the AI community, regulators, policy makers and data subjects.","University College Dublin, Ireland; Lero, Dublin, Ireland; University of Edinburgh, UK","ai/ethics-of-machine-learning, nlp/corpus-construction, nlp/text-corpora, nlp/multimodal-corpora","1.3 The Common-Crawl¶ Common Crawl is a San Francisco based nonprofit 501(c)(3) organization that has been regularly crawling the entire WWW and generating archival snapshot data-dumps, often termed the Common-Crawl (CC) datasets in machine learning lexicon, since 2011. The current version of this archive (dated April 2021) is roughly 320 TB in size and spans 3.1 billion pages. The sheer scale of this dataset has an enduring allure in the AI community and has been used as a seeding dataset in training pipelines of high-profile projects⁵ [⁵https://commoncrawl.org/the-data/examples/] such as GPT-3 [34], CLUECorpus2020 [35], and XLM-R [36]. Inevitably this gargantuan dataset mined from the WWW suffers from serious issues. For instance, Matic et al. [37] used the Curlie.org crowdsourced taxonomy project to train a GDPR-Article(9)-Sensitive-URL classifier which revealed that, of the 1 Billion URLs they audited in the Common Crawl project, 155 million URLs fell into the sensitive category. The Realtoxicityprompts work [38] revealed that CommonCrawl contained over 300,000 documents from unreliable news sites and banned subReddit pages containing hate speech and racism. More recently, Luccioni and Viviano’s initial study [39] placed the ‘Hate speech’ content level to be around 4.02%-5.24% (the 1+ hate n-grams level was estimated higher at 17.78%). With regards to CCAligned, a 119-language parallel dataset built off 68 snapshots of Common Crawl, Caswell et al.
[40] revealed that there were notable amounts of pornographic content (> 10%) found for 11 languages with prevalence rates being as high as 24% for language pairs such as en-om_KE. The LAION-400M dataset emerges from this landscape containing hundreds of millions of Image-Alt-text pairs parsed from the Common-Crawl dataset and filtered using a previously Common-Crawl trained AI model (CLIP [2]). With this background, we present our findings following our initial audit of the LAION-400M dataset below.",,,, "Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, Aran Komatsuzaki – LAION.ai; Gentec Data, Romania; Technical University of Munich, Germany; Juelich Supercomputing Center, Germany; Georgia Institute of Technology, USA; EleutherAI",LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs,https://arxiv.org/abs/2111.02114,papers,20210101Z00:00:00,,"Multi-modal language-vision models trained on hundreds of millions of image-text pairs (e.g. CLIP, DALL-E) gained a recent surge, showing remarkable capability to perform zero- or few-shot learning and transfer even in absence of per-sample labels on target image data. Despite this trend, to date there have been no publicly available datasets of sufficient scale for training such models from scratch. To address this issue, in a community effort we build and release for public LAION-400M, a dataset with CLIP-filtered 400 million image-text pairs, their CLIP embeddings and kNN indices that allow efficient similarity search.","LAION.ai; Gentec Data, Romania; Technical University of Munich, Germany; Juelich Supercomputing Center, Germany; Georgia Institute of Technology, USA; EleutherAI","nlp/corpus-construction, nlp/multimodal-corpora","2.1 Distributed processing of Common Crawl¶ To create image-text pairs, we parse through WAT files from Common Crawl and parse out all HTML IMG tags containing an alt-text attribute. We download the raw images from the parsed URLs with asynchronous requests using the Trio and Asks libraries.¶ 2.1.1 Filtering out unsuitable image-text pairs¶ After downloading the WAT files from Common Crawl, we apply the following filtering conditions: • All samples with less than 5 characters of alt-text length or less than 5 KB image size are dropped.¶ • Duplicate removal is performed with a bloom filter based on URL and alt-text.¶ • We use CLIP to compute embeddings of the image and alt-text. Then we compute the cosine similarity of both embeddings and drop all samples with cosine similarity below 0.3. This threshold was selected based on human inspections.¶ • We use the CLIP embeddings of images and texts to filter out illegal contents.",,LAION-400M,, "Michael Bugert, Iryna Gurevych – Ubiquitous Knowledge Processing Lab (UKP), Technical University of Darmstadt, Germany",Event Coreference Data (Almost) for Free: Mining Hyperlinks from Online News,https://aclanthology.org/2021.emnlp-main.38,papers,20210101Z00:00:00,,"Cross-document event coreference resolution (CDCR) is the task of identifying which event mentions refer to the same events throughout a collection of documents. Annotating CDCR data is an arduous and expensive process, explaining why existing corpora are small and lack domain coverage. To overcome this bottleneck, we automatically extract event coreference data from hyperlinks in online news: When referring to a significant real-world event, writers often add a hyperlink to another article covering this event.
We demonstrate that collecting hyperlinks which point to the same article(s) produces extensive and high-quality CDCR data and create a corpus of 2M documents and 2.7M silver-standard event mentions called HyperCoref. We evaluate a state-of-the-art system on three CDCR corpora and find that models trained on small subsets of HyperCoref are highly competitive, with performance similar to models trained on gold-standard data. With our work, we free CDCR research from depending on costly human-annotated training data and open up possibilities for research beyond English CDCR, as our data extraction approach can be easily adapted to other languages.","Ubiquitous Knowledge Processing Lab (UKP), Technical University of Darmstadt, Germany","nlp/coreference resolution, event detection","To this end, we devise a data extraction pipeline which mines such datasets automatically from Common Crawl² [²https://commoncrawl.org/] and apply it to create the HYPERCOREF corpus, consisting of 40 news outlets with over 2M mentions in total, far exceeding the size of existing CDCR corpora.",,,, "Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oğuz, Edouard Grave, Wen-tau Yih, Sebastian Riedel – Facebook AI Research; University College London, United Kingdom; University of Mannheim, Germany; ENS, PSL University, France; Inria, France; University of Washington, United States",The Web Is Your Oyster - Knowledge-Intensive NLP against a Very Large Web Corpus,https://arxiv.org/abs/2112.09924,papers,20210101Z00:00:00,"Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Information Retrieval (cs.IR), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences","In order to address increasing demands of real-world applications, the research for knowledge-intensive NLP (KI-NLP) should advance by capturing the challenges of a truly open-domain environment: web-scale knowledge, lack of structure, inconsistent quality and noise. To this end, we propose a new setup for evaluating existing knowledge intensive tasks in which we generalize the background corpus to a universal web snapshot. We investigate a slate of NLP tasks which rely on knowledge - either factual or common sense, and ask systems to use a subset of CCNet - the Sphere corpus - as a knowledge source. In contrast to Wikipedia, otherwise a common background corpus in KI-NLP, Sphere is orders of magnitude larger and better reflects the full diversity of knowledge on the web. Despite potential gaps in coverage, challenges of scale, lack of structure and lower quality, we find that retrieval from Sphere enables a state of the art system to match and even outperform Wikipedia-based models on several tasks. We also observe that while a dense index can outperform a sparse BM25 baseline on Wikipedia, on Sphere this is not yet possible. 
To facilitate further research and minimise the community's reliance on proprietary, black-box search engines, we share our indices, evaluation metrics and infrastructure.","Facebook AI Research; University College London, United Kingdom; University of Mannheim, Germany; ENS, PSL University, France; Inria, France; University of Washington, United States","nlp/question-answering, nlp/knowledge-intensive-tasks, ai/knowledge-base","[…] CCNet processes Common Crawl by performing deduplication, language identification and quality filtering (articles are split into three quality tiers: head, […] We pick the CCNet snapshot corresponding to the August 2019 Common Crawl […]",,,CCNet, "Metod Jazbec, Barna Pàsztor, Felix Faltings, Nino Antulov-Fantulin, Petter N. Kolm – ETH Zurich, Switzerland; New York University, New York, USA",On the Impact of Publicly Available News and Information Transfer to Financial Markets,https://royalsocietypublishing.org/doi/10.1098/rsos.202321,papers,20210101Z00:00:00,,"We quantify the propagation and absorption of large-scale publicly available news articles from the World Wide Web to financial markets. To extract publicly available information, we use the news archives from the Common Crawl, a non-profit organization that crawls a large part of the web. We develop a processing pipeline to identify news articles associated with the constituent companies in the S&P 500 index, an equity market index that measures the stock performance of US companies. Using machine learning techniques, we extract sentiment scores from the Common Crawl News data and employ tools from information theory to quantify the information transfer from public news articles to the US stock market. Furthermore, we analyse and quantify the economic significance of the news-based information with a simple sentiment-based portfolio trading strategy. Our findings provide support that information in publicly available news on the World Wide Web has a statistically and economically significant impact on events in financial markets.","ETH Zurich, Switzerland; New York University, New York, USA","statistical-finance, ai/machine-learning, nlp/sentiment-analysis, financial-markets",,,,, "Daniel Varab, Natalie Schluter – IT University of Copenhagen, Denmark","MassiveSumm: a very large-scale, very multilingual, news summarisation dataset",https://aclanthology.org/2021.emnlp-main.797,papers,20210101Z00:00:00,,"Current research in automatic summarisation is unapologetically anglo-centered – a persistent state-of-affairs, which also predates neural net approaches. High-quality automatic summarisation datasets are notoriously expensive to create, posing a challenge for any language. However, with digitalisation, archiving, and social media advertising of newswire articles, recent work has shown how, with careful methodology application, large-scale datasets can now be simply gathered instead of written. In this paper, we present a large-scale multilingual summarisation dataset containing articles in 92 languages, spread across 28.8 million articles, in more than 35 writing scripts. This is both the largest, most inclusive, existing automatic summarisation dataset, as well as one of the largest, most inclusive, ever published datasets for any NLP task. We present the first investigation on the efficacy of resource building from news platforms in the low-resource language setting.
Finally, we provide some first insight on how low-resource language settings impact state-of-the-art automatic summarisation system performance.","IT University of Copenhagen, Denmark","nlp/text-summarization, nlp/corpus-construction","Comparing with web-scraped multilingual datasets. We compared the intersection of our dataset with two large-scale web datasets widely used by the NLP community: Wikipedia⁴ [⁴https://en.wikipedia.org/wiki/List_of_Wikipedias#Edition_details as of May 10 2021] and Common Crawl⁵ [⁵April 2021 crawl CC-MAIN-2021-04 https://commoncrawl.github.io/cc-crawl-statistics/plots/languages.csv]. An overview of this comparison can be found in Table 4. The manual care that we took in curating the list of platforms from which we wanted to collect data resulted in more data from an improved diversity of languages. For 52 of our languages, MS-All either matches or surpasses the number of Wikipedia pages for the language in question, showing the importance of the full dataset simply as raw data. In fact, the majority of MassiveSumm languages from South Saharan Africa (14/18) have more documents in MS-All than in Wikipedia. And well over half of the MassiveSumm languages for Eurasia (38/63) have more documents in MS-All than in Wikipedia. Turning to Common Crawl, almost half of the languages from South Saharan Africa (8/18) have more pages in MS-All than in Common Crawl. Six out of 63 Eurasian languages have more articles in MS-All than in Common Crawl. When we consider even just the heavily filtered automatic summarisation portion of the data, MS, we find that 10 of the South Saharan African languages contain more pages than Wikipedia, and 5 out of 18 of these languages contain more data than Common Crawl. For Eurasia, 19 of the 63 languages contain more pages than Wikipedia. Table 5 gives the proportions of the articles in MS-All that are also contained in Common Crawl, for those languages where more than 49% can be obtained. This is 18 languages – around a fifth of the languages represented by MassiveSumm. Hence we observe that large portions of easily indexible and crawlable, publicly available, diverse linguistic data are not being scraped into one of the most important datasets for NLP, both in size and in determining to a large extent which languages get mainstream NLP research: Common Crawl.¶ 5 Reflections on Low-Resource Language Automatic Summarisation¶ The central datasets for automatic summarisation have consistently been for English.
In this section we consider how this focus on English has resulted in limited dataset curation methodology development (Section 5.1) and limited automatic summarisation system design (Section 5.2).",,,, Sebastian Nagel – Common Crawl,From web graphs to prioritizing web crawls,https://doi.org/10.5281/zenodo.6044920,papers,20210101Z00:00:00,,,Common Crawl,"web crawling, web-science/hyperlinkgraph",,,,, "Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, Shruti Gupta, Subhash Chandra Bose Gali, Vish Subramanian, Partha Talukdar – Google; Indian Institute of Technology, Patna, India; Indian Institute of Technology, Bombay, India; Delhi Technological University, India",MuRIL: Multilingual Representations for Indian Languages,https://arxiv.org/abs/2103.10730,papers,20210101Z00:00:00,,,"Google; Indian Institute of Technology, Patna, India; Indian Institute of Technology, Bombay, India; Delhi Technological University, India","nlp/language-model, nlp/corpus-construction",Monolingual Data: We collect monolingual data for the 17 languages mentioned above from the Common Crawl OSCAR corpus¹ and Wikipedia².,,,OSCAR, "Michael Völske, Janek Bevendorff, Johannes Kiesel, Benno Stein, Maik Fröbe, Matthias Hagen, Martin Potthast – Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany",Web Archive Analytics,https://dl.gi.de/handle/20.500.12116/34759,papers,20210101Z00:00:00,,"Web archive analytics is the exploitation of publicly accessible web pages and their evolution for research purposes—to the extent organizationally possible for researchers. In order to better understand the complexity of this task, the first part of this paper puts the entirety of the world's captured, created, and replicated data (the “Global Datasphere”) in relation to other important data sets such as the public internet and its web pages, or what is preserved thereof by the Internet Archive. Recently, the Webis research group, a network of university chairs to which the authors belong, concluded an agreement with the Internet Archive to download a substantial part of its web archive for research purposes. The second part of the paper in hand describes our infrastructure for processing this data treasure: We will eventually host around 8 PB of web archive data from the Internet Archive and Common Crawl, with the goal of supplementing existing large scale web corpora and forming a non-biased subset of the 30 PB web archive at the Internet Archive.","Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany","web-archiving, big data, data processing","In the Webis research group, we aim to store up to 8 PB of web archive data on our own premises, much of it originating from the Internet Archive, but also from other sources, such as the Common Crawl. [...] As of October 2020, almost 2.3 PB of data—of which 560 TB stem from the Internet Archive and the rest from the Common Crawl—have been downloaded and are stored on our infrastructure.",,,,