"cc_project_author","post_title","cc_project_url","cc_project_category","post_date","keywords","abstract","cc_author_affiliation","cc_class","cc_snippet","cc_dataset_used","cc_derived_dataset_about","cc_derived_dataset_used","cc_derived_dataset_cited" "Ahad Rana – Common Crawl","Common Crawl – Building an open web-scale crawl using Hadoop","https://www.slideshare.net/hadoopusergroup/common-crawlpresentation","papers","20100101Z00:00:00","","","Common Crawl","web-crawling, big data, Hadoop","","","","","" "Hannes Mühleisen, Christian Bizer – Freie Universität, Berlin, Germany","Web Data Commons – Extracting Structured Data from Two Large Web Corpora","http://ceur-ws.org/Vol-937/ldow2012-inv-paper-2.pdf","papers","20120101Z00:00:00","","","Freie Universität, Berlin, Germany","","","","","","" "Alexandra Birch, Nadir Durrani, Phillip Koehn – School of Informatics, University of Edinburgh","Edinburgh SLT and MT System Description for the IWSLT 2013","http://workshop2013.iwslt.org/downloads/Edinburgh_SLT_and_MT_System_Description_for_the_IWSLT_2013_Evaluation.pdf","papers","20130101Z00:00:00","","","School of Informatics, University of Edinburgh","","","","","","" "Jason R. Smith, Herve Saint-Amand, Magdalena Plamada, Phillipp Koehn, Chris Callison-Burch, Adam Lopez – Johns Hopkins University, University of Edinburgh, University of Zurich, University of Pennsylvania","Dirt Cheap Web-Scale Parallel Text from the Common Crawl","http://www.cs.jhu.edu/~ccb/publications/bitexts-from-common-crawl.pdf","papers","20130101Z00:00:00","","","Johns Hopkins University, University of Edinburgh, University of Zurich, University of Pennsylvania","","","","","","" "Sara Stymne, Christian Hardmeier, Jorg Tiedemann, Joakim Nivre – Uppsala University: Department of Linguistics and Philology","Tunable Distortion Limits and Corpus Cleaning for SMT","http://statmt.org/wmt13/pdf/WMT29.pdf","papers","20130101Z00:00:00","","","Uppsala University: Department of Linguistics and Philology","","","","","","" "Thanh-Le Ha, Teresa Herrmann, Jan Niehues, Mohammed Mediani, Eunah Cho, Yuqi Zhang, Isabel Slawik, Alex Waibel – Institute for Anthropomatics","The KIT Translation Systems for IWSLT 2013","http://workshop2013.iwslt.org/downloads/The_KIT_Translation_Systems_for_IWSLT_2013.pdf","papers","20130101Z00:00:00","","","Institute for Anthropomatics","","","","","","" "Wanno Drijfhout, Oliver Jundt, Lesley Wevers, Djoerd Hiemstra – University of Twente","Traitor: Associating Concepts using the World Wide Web","http://doc.utwente.nl/88328/","papers","20130101Z00:00:00","","","University of Twente","","","","","","" "Christian Bizer, Kai Eckert, Robert Meusel, Hannes Mühleisen, Michael Schuhmacher, Johanna Völker – Data and Web Science Group – University of Mannhein, Database Architectures Group, Centrum Wiskunde & Informatica, Netherlands","Deployment of RDFa, Microdata, and Microformats on the Web – A Quantitative Analysis","http://hannes.muehleisen.org/Bizer-etal-DeploymentRDFaMicrodataMicroformats-ISWC-InUse-2013.pdf","papers","20130101Z00:00:00","","","Data and Web Science Group – University of Mannhein, Database Architectures Group, Centrum Wiskunde & Informatica, Netherlands","","","","","","" "Jeffrey Pennington, Richard Socher, Christopher D. 
Manning – Stanford University, California, USA","GloVe: Global vectors for word representation","https://aclanthology.org/D14-1162.pdf","papers","20140101Z00:00:00","","","Stanford University, California, USA","nlp/word-embeddings","We trained our model on five corpora of varying sizes: [...] and on 42 billion tokens of web data, from Common Crawl⁵ [⁵ To demonstrate the scalability of the model, we also trained it on a much larger sixth corpus, containing 840 billion tokens of web data, but in this case we did not lowercase the vocabulary, so the results are not directly comparable.].","","","","" "Mohammed Mediani, Joshua Winebarger, Alexander Waibel – Karlsruhe Institute of Technology, Germany","Improving In-Domain Data Selection For Small In-Domain Sets","http://www.statmt.org/OSMOSES/IWSLT-36.pdf","papers","20140101Z00:00:00","","","Karlsruhe Institute of Technology, Germany","","","","","","" "Junfei Guo, Juan Liu, Qi Han, Andreas Maletti – School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","A Tunable Language Model for Statistical Machine Translation","http://www.ims.uni-stuttgart.de/institut/mitarbeiter/maletti/pub/guoliuhanmal14.pdf","papers","20140101Z00:00:00","","","School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","","","","","","" "Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Y. 
Ng – Baidu Research – Silicon Valley AI Lab","Deep Speech: Scaling up end-to-end speech recognition","http://arxiv.org/pdf/1412.5567v2.pdf","papers","20140101Z00:00:00","","","Baidu Research – Silicon Valley AI Lab","","","","","","" "Eva Hasler, Philipp Koehn, Barry Haddow, Phil Blunsom – University of Edinburgh; University of Oxford","Dynamic Topic Adaptation for Phrase-based MT","http://www.aclweb.org/anthology/E/E14/E14-1035.pdf","papers","20140101Z00:00:00","","","University of Edinburgh; University of Oxford","","","","","","" "Michele Tortelli – Politecnico di Bari","Bloom filter-based Routing in NDN","http://www.poliba.it/Didattica/docs/scorepoliba2014_submission_179.pdf","papers","20140101Z00:00:00","","","Politecnico di Bari","","","","","","" "Filip Ginter, Jenna Kanerva – University of Turku","Fast Training of word2vec Representations Using N-gram Corpora","http://www2.lingfil.uu.se/SLTC2014/abstracts/sltc2014_submission_27.pdf","papers","20140101Z00:00:00","","","University of Turku","","","","","","" "Petar Petrovski, Volha Bryl, Christian Bizer – University of Mannheim, Germany- Research Group Data and Web Science","Learning Regular Expressions for the Extraction of Product Attributes from E-commerce Microdata","http://ceur-ws.org/Vol-1267/LD4IE2014_Petrovski.pdf","papers","20140101Z00:00:00","","","University of Mannheim, Germany- Research Group Data and Web Science","","","","","","" "Robert Meusel, Petar Petrovski, Christian Bizer – University of Mannheim, Germany- Research Group Data and Web Science","The Web Data Commons Microdata, RDFa and Microformat Dataset Series","http://link.springer.com/chapter/10.1007/978-3-319-11964-9_18#page-1","papers","20140101Z00:00:00","","","University of Mannheim, Germany- Research Group Data and Web Science","","","","","","" "Robert Meusel, Peter Mika, Roi Blanko – University of Mannheim; Yahoo Labs- Barcelona","Focused Crawling for Structured Data","http://dl.acm.org/citation.cfm?id=2661902","papers","20140101Z00:00:00","","","University of Mannheim; Yahoo Labs- Barcelona","","","","","","" "Chenchen Ding, Masao Utiyama, Eiichiro Sumita – National Institute of Information and Communications Technology Japan","Document-level Re-ranking with Soft Lexical and Semantic Features for Statistical Machine Translation","http://www.mibel.cs.tsukuba.ac.jp/~tei/AMTA2014.pdf","papers","20140101Z00:00:00","","","National Institute of Information and Communications Technology Japan","","","","","","" "Masumi Shirakawa, Kotaro Nakayama, Eiji Aramaki, Takahiro Hara, Shojiro Nishio – Osaka University","Collecting Conceptualized Relations from Terabytes of Web Texts for Understanding Unknown Terms","http://dl.acm.org/citation.cfm?id=2682777","papers","20140101Z00:00:00","","","Osaka University","","","","","","" "Jenna Kanerva, Juhani Luotolahti, Veronika Laippala, Filip Ginter – University of Turku","Syntactic N-gram Collection from a Large-Scale Corpus of Internet Finnish","http://ebooks.iospress.nl/volumearticle/38025","papers","20140101Z00:00:00","","","University of Turku","","","","","","" "Willem Robert van Hage, Thomas Ploeger, Jesper Hoeksema – SynerScope B.V., VU University Amsterdam","Number frequency on the web","http://dl.acm.org/citation.cfm?id=2576962","papers","20140101Z00:00:00","","","SynerScope B.V., VU University Amsterdam","","","","","","" "Christian Buck, Kenneth Heafield, Bas van Ooyen – University of Edinburgh, Stanford University, Owlin BV","N-gram Counts and Language Models from the Common 
Crawl","http://statmt.org/ngrams/BuckEtAl_LREC2014_CommonCrawlLM.pdf","papers","20140101Z00:00:00","","","University of Edinburgh, Stanford University, Owlin BV","","","","","","" "Christian Hardmeier, Sara Stymne, Jörg Tiedemann, Aaron Smith, Joakim Nivre – Uppsala University: Department of Linguistics and Philology","Anaphora Models and Reordering for Phrase-Based SMT","http://acl2014.org/acl2014/W14-33/pdf/W14-3312.pdf","papers","20140101Z00:00:00","","","Uppsala University: Department of Linguistics and Philology","","","","","","" "Lane O. B. Schwartz, Timothy Anderson, Jeremy Gwinnup, Katherine M. Young – Air Force Research Laboratory, SRA International, N-Space Analysis LLC","Machine Translation and Monolingual Postediting:The AFRL WMT-14 System","http://www.ling.uni-potsdam.de/~koller/aclpub/W14-33/cdrom/pdf/W14-3321.pdf","papers","20140101Z00:00:00","","","Air Force Research Laboratory, SRA International, N-Space Analysis LLC","","","","","","" "Hoang Cuong, Khalil Sima’an – University of Amsterdam - Institute for Logic, Language and Computation","Latent Domain Translation Models in Mix-of-Domains Haystack","http://www.aclweb.org/anthology/C/C14/C14-1182.pdf","papers","20140101Z00:00:00","","","University of Amsterdam - Institute for Logic, Language and Computation","","","","","","" "Thomas Steiner, Hannes Mühleisen, Ruben Verborgh, Pierre-Antoine Champin, Benoît Encelle, Yannick Prié – Université de Lyon, Database Architectures Group; Multimedia Lab, Ghent University; iMinds, Université de Nantes","Weaving the Web(VTT) of Data","http://telemedicina.unifesp.br/pub/Events/2013-05%20-%20WWW2013/www2013/www2013.org/companion/p1399.pdf","papers","20140101Z00:00:00","","","Université de Lyon, Database Architectures Group; Multimedia Lab, Ghent University; iMinds, Université de Nantes","","","","","","" "Marcin Wylot, Philippe Cudré-Mauroux, Paul Groth – eXascale Infolab, University of Fribourg; VU University Amsterdam","TripleProv: Efficient Processing of Lineage Queries in a Native RDF Store","http://exascale.info/sites/default/files/TipleProv.pdf","papers","20140101Z00:00:00","","","eXascale Infolab, University of Fribourg; VU University Amsterdam","","","","","","" "Robert Meusel, Sebastiano Vigna, Oliver Lehmberg, Christian Bizer – Data and Web Science Group - University of Mannheim, Laboratory for Web - Algorithmics Università degli Studi di Milano","Graph Structure in the Web — Revisited","http://vigna.di.unimi.it/ftp/papers/GraphStructureRevisited.pdf","papers","20140101Z00:00:00","","","Data and Web Science Group - University of Mannheim, Laboratory for Web - Algorithmics Università degli Studi di Milano","","","","","","" "Calvin Ardi, John Heidemann – USC/Information Sciences Institute","Web-scale Content Reuse Detection","ftp://ftp.isi.edu/isi-pubs/tr-692.pdf","papers","20140101Z00:00:00","","","USC/Information Sciences Institute","","","","","","" "Yuta Tsuboi – IBM Resarch","Neural Networks Leverage Corpus-wide Information for Part-of-speech Tagging","http://2boy.org/~yuta/publications/neuraltagger-emnlp2014-tsuboi.pdf","papers","20140101Z00:00:00","","","IBM Resarch","","","","","","" "Mauro Cettolo, Nicola Bertoldi, Marcello Federico, Holger Schwenk, Loïc Barrault, Christophe Servan – Fondazione Bruno Kessler, University of Le Mans, Xerox Research Centre Europe","Translation project adaptation for MT-enhanced computer assisted translation","http://link.springer.com/article/10.1007/s10590-014-9152-1","papers","20140101Z00:00:00","","","Fondazione Bruno Kessler, 
University of Le Mans, Xerox Research Centre Europe","","","","","","" "Germán Sanchis-Trilles, Daniel Ortiz-Martınez, Francisco Casacuberta – PRHLT Centre - Universidad Politécnica de Valencia","Efficient Wordgraph Pruning for Interactive Translation Prediction","http://www.casmacat.eu/uploads/Main/2eamt2014.pdf","papers","20140101Z00:00:00","","","PRHLT Centre - Universidad Politécnica de Valencia","","","","","","" "Vasilis Kolias, Ioannis Anagnostopoulos, Eleftherios Kayafas – National Technical University of Athens, University of Thessaly","Exploratory Analysis of a Terabyte Scale Web Corpus","http://arxiv.org/abs/1409.5443","papers","20140101Z00:00:00","","","National Technical University of Athens, University of Thessaly","","","","","","" "Masahiro Mizukami, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura – Nara Institute of Science and Technology","Building a Free General-Domain Paraphrase Database for Japanese","http://isw3.naist.jp/~masahiro-mi/paper/ma14cocosda.pdf","papers","20140101Z00:00:00","","","Nara Institute of Science and Technology","","","","","","" "Robert Meusel, Sebastiano Vigna, Oliver Lehmberg, Christian Bizer – University of Mannheim, Germany; Università degli Studi di Milano, Italy","The Graph Structure in the Web – Analyzed on Different Aggregation Levels","https://pdfs.semanticscholar.org/b5d5/88298e6845b4bfd40ea779ce21e628239ef3.pdf","papers","20150101Z00:00:00","","","University of Mannheim, Germany; Università degli Studi di Milano, Italy","web-science/hyperlinkgraph","","","","","" "Alex Stolz, Martin Hepp – Universitaet der Bundeswehr Munich, Germany","Towards Crawling the Web for Structured Data: Pitfalls of Common Crawl for E-Commerce","http://ceur-ws.org/Vol-1426/paper-04.pdf","papers","20150101Z00:00:00","","","Universitaet der Bundeswehr Munich, Germany","nlp/corpus-representativeness, semantic web, microdata, e-commerce","","","","","" "Julian Eberius, Maik Thiele, Katrin Braunschweig, Wolfgang Lehner – Technische Universität Dresden, Germany","Top-k Entity Augmentation Using Consistent Set Covering","https://www.semanticscholar.org/paper/Top-k-entity-augmentation-using-consistent-set-Eberius-Thiele/a554fe7c49837e2d2d995e00fd3b62a6ca5650f2","papers","20150101Z00:00:00","","","Technische Universität Dresden, Germany","semantic web, web tables, web mining","To enable repeatability we publish the implementation², but also include the web table corpus used for the evaluation³. 
This corpus contains 100M Web tables extracted from a publicly available Web crawl⁴ [4: http://commoncrawl.org]","","{DresdenWebTableCorpus}","","" "Matthew Malensek, Sangmi Lee Pallickara, Shrideep Pallickara – Colorado State University","Alleviation of Disk I/O Contention in Virtualized Settings for Data-Intensive Computing","http://galileo.cs.colostate.edu/papers/DiskInterference-BDC.pdf","papers","20150101Z00:00:00","","","Colorado State University","","","","","","" "Titus Barik, Kevin Lubick, Justin Smith, John Slankas, Emerson Murphy-Hill – ABB Corporate Research and North Carolina State University","FUSE: A Reproducible, Extendable, Internet-scale Corpus of Spreadsheets","http://kjlubick.github.io/pubs/MSR2015-Fuse_spreadsheet_corpus.pdf","papers","20150101Z00:00:00","","","ABB Corporate Research and North Carolina State University","","","","","","" "Joachim Daiber, Lautaro Quiroz, Roger Wechsler, Stella Frank – University of Amsterdam","Splitting Compounds by Semantic Analogy","https://ufal.mff.cuni.cz/~rosa/2015/docs/dmtw2015.pdf#page=26","papers","20150101Z00:00:00","","","University of Amsterdam","","","","","","" "Mikhail Galkin, Dmitry Mouromtsev, Sören Auer – ITMO University- St. Petersburg, Russia, University of Bonn- Germany","Identifying Web Tables – Supporting a Neglected Type of Content on the Web","http://arxiv.org/pdf/1503.06598.pdf","papers","20150101Z00:00:00","","","ITMO University- St. Petersburg, Russia, University of Bonn- Germany","","","","","","" "Brendan Juba – Washington University in St. Louis","Principled Sampling for Anomaly Detection","http://www.cse.wustl.edu/~bjuba/papers/anomaly_detection.pdf","papers","20150101Z00:00:00","","","Washington University in St. Louis","","","","","","" "Kowalczuk Ewa, Jedrzej Potoniec, Agnieszka Ławrynowicz – Institute of Computing Science, Poznan University of Technology, Poland","Extracting Usage Patterns of Ontologies on the Web: a Case Study on GoodRelations Vocabulary in RDFa","http://ceur-ws.org/Vol-1265/owled2014_submission_14.pdf","papers","20150101Z00:00:00","","","Institute of Computing Science, Poznan University of Technology, Poland","","","","","","" "Junfei Guo, Juan Liu, Qi Han, Andreas Maletti – School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","A Tunable Language Model for Statistical Machine Translation","http://www.ims.uni-stuttgart.de/institut/mitarbeiter/maletti/pub/guoliuhanmal14.pdf","papers","20150101Z00:00:00","","","School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","","","","","","" "Kay Ousterhout, Ryan Rasti, Sylvia Ratnasamy, Scott Shenker, Byung-Gon Chun – UC Berkeley, ICSI, Vmware, Seoul National University","Making Sense of Performance in Data Analytics Frameworks","http://www.eecs.berkeley.edu/~keo/publications/nsdi15-final147.pdf","papers","20150101Z00:00:00","","","UC Berkeley, ICSI, Vmware, Seoul National University","","","","","","" "Evan Jaffe, Lifeng Jin, David King, Marten van Schijndel – Dept.
of Linguistics, Ohio State University","Azmat: Sentence Similarity using Associative Matrices","http://www.ling.ohio-state.edu/~vanschm/resources/uploads/jaffe_etal-2015-semeval.pdf","papers","20150101Z00:00:00","","","Dept. of Linguistics, Ohio State University","","","","","","" "Alexander A Alemi, Paul Ginsparg – Dept. of Physics, Cornell University, Dept. of Physics and Information Science, Cornell University","Text Segmentation based on Semantic Word Embeddings","http://arxiv.org/pdf/1503.05543.pdf","papers","20150101Z00:00:00","","","Dept. of Physics, Cornell University, Dept. of Physics and Information Science, Cornell University","","","","","","" "Ivan Habernal, Omnia Zayed, Iryna Gurevych – University of Darmstadt, Germany","C4Corpus: Multilingual Web-Size Corpus with Free License","http://www.lrec-conf.org/proceedings/lrec2016/pdf/388_Paper.pdf","papers","20160101Z00:00:00","","Large Web corpora containing full documents with permissive licenses are crucial for many NLP tasks. In this article we present the construction of 12 million-pages Web corpus (over 10 billion tokens) licensed under CreativeCommons license family in 50+ languages that has been extracted from CommonCrawl, the largest publicly available general Web crawl to date with about 2 billion crawled URLs. Our highly-scalable Hadoop-based framework is able to process the full CommonCrawl corpus on 2000+ CPU cluster on the Amazon Elastic Map/Reduce infrastructure. The processing pipeline includes license identification, state-of-the-art boilerplate removal, exact duplicate and near-duplicate document removal, and language detection. The construction of the corpus is highly configurable and fully reproducible, and we provide both the framework (DKPro C4CorpusTools) and the resulting data (C4Corpus) to the research community.","University of Darmstadt, Germany","nlp/corpus-construction, legal/copyright, license/creative-commons, nlp/boilerplate-removal, ir/duplicate-detection","","CC-MAIN-2016-07","{DKPro-C4}","","" "Roland Schäfer – Freie Universität Berlin, Germany","CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws","http://rolandschaefer.net/?p=994","papers","20160101Z00:00:00","","In this paper, I describe a method of creating massively huge web corpora from the CommonCrawl data sets and redistributing the resulting annotations in a stand-off format. Current EU (and especially German) copyright legislation categorically forbids the redistribution of downloaded material without express prior permission by the authors. Therefore, stand-off annotations or other derivates are the only format in which European researchers (like myself) are allowed to re-distribute the respective corpora. In order to make the full corpora available to the public despite such restrictions, the stand-off format presented here allows anybody to locally reconstruct the full corpora with the least possible computational effort.","Freie Universität Berlin, Germany","nlp/corpus-construction, legal/copyright","","","{CommonCOW}","","" "Roland Schäfer – Freie Universität Berlin, Germany","Accurate and Efficient General-purpose Boilerplate Detection for Crawled Web Corpora","https://doi.org/10.1007/s10579-016-9359-2","papers","20170101Z00:00:00","Boilerplate, Corpus construction, Non-destructive corpus normalization, Web corpora","Removal of boilerplate is one of the essential tasks in web corpus construction and web indexing. 
Boilerplate (redundant and automatically inserted material like menus, copyright notices, navigational elements, etc.) is usually considered to be linguistically unattractive for inclusion in a web corpus. Also, search engines should not index such material because it can lead to spurious results for search terms if these terms appear in boilerplate regions of the web page. The size of large web corpora necessitates the use of efficient algorithms while a high accuracy directly improves the quality of the final corpus. In this paper, I present and evaluate a supervised machine learning approach to general-purpose boilerplate detection for languages based on Latin alphabets which is both very efficient and very accurate. Using a Multilayer Perceptron and a high number of carefully engineered features, I achieve between 95\% and 99\% correct classifications (depending on the input language) with precision and recall over 0.95. Since the perceptrons are trained on language-specific data, I also evaluate how well perceptrons trained on one language perform on other languages. The single features are also evaluated for the merit they contribute to the classification. I show that the accuracy of the Multilayer Perceptron is on a par with that of other classifiers such as Support Vector Machines. I conclude that the quality of general-purpose boilerplate detectors depends mainly on the availability of many well-engineered features and which are highly language-independent. The method has been implemented in the open-source texrex web page cleaning software, and large corpora constructed using it are available from the COW initiative, including the CommonCOW corpora created from CommonCrawl data sets.","Freie Universität Berlin, Germany","nlp/boilerplate-removal, nlp/web-as-corpus, nlp/corpus-construction","","","","","" "Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, Václava Kettnerová, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. 
Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, Josie Li – Charles University, Czech Republic; Uppsala University, Sweden; University of Turku, Finland; University of Cambridge; Google; Bauhaus-Universität Weimar, Germany; UiT The Arctic University of Norway; University of the Basque Country, Spain; Istanbul Technical University, Turkey; Stanford University; New York University Abu Dhabi; City University of Hong Kong; Ohio State University, USA; University of Turin, Italy; University of Pisa, Italy; IBM Research; Nuance Communications; INRIA – Paris 7, France; University of Tübingen, Germany; DFKI, Germany; text & form, Germany","CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies","http://www.aclweb.org/anthology/K/K17/K17-3001.pdf","papers","20170101Z00:00:00","","The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets. In 2017, the task was devoted to learning dependency parsers for a large number of languages, in a real-world setting without any gold-standard annotation on input. All test sets followed a unified annotation scheme, namely that of Universal Dependencies. In this paper, we define the task and evaluation methodology, describe how the data sets were prepared, report and analyze the main results, and provide a brief categorization of the different approaches of the participating systems.","Charles University, Czech Republic; Uppsala University, Sweden; University of Turku, Finland; University of Cambridge; Google; Bauhaus-Universität Weimar, Germany; UiT The Arctic University of Norway; University of the Basque Country, Spain; Istanbul Technical University, Turkey; Stanford University; New York University Abu Dhabi; City University of Hong Kong; Ohio State University, USA; University of Turin, Italy; University of Pisa, Italy; IBM Research; Nuance Communications; INRIA – Paris 7, France; University of Tübingen, Germany; DFKI, Germany; text & form, Germany","nlp/dependency-parsing, nlp/dependency-treebank, nlp/corpus-construction","The supporting raw data was gathered from CommonCrawl, which is a publicly available web crawl created and maintained by the non-profit CommonCrawl foundation.² The data is publicly available in the Amazon cloud both as raw HTML and as plain text. It is collected from a number of independent crawls from 2008 to 2017, and totals petabytes in size. We used cld2³ as the language detection engine because of its speed, available Python bindings and large coverage of languages. Language detection was carried out on the first 1024 bytes of each plaintext document. Deduplication was carried out using hashed document URLs, a simple strategy found in our tests to be effective for coarse duplicate removal. 
The data for each language was capped at 100,000 tokens per a single input file.","","conll-2017-shared-task","","" "Abu Bakr Soliman, Kareem Eissa, Samhaa El-Beltagy – Nile University, Egypt","AraVec: A set of Arabic Word Embedding Models for use in Arabic NLP","https://www.researchgate.net/publication/319880027_AraVec_A_set_of_Arabic_Word_Embedding_Models_for_use_in_Arabic_NLP","papers","20170101Z00:00:00","","","Nile University, Egypt","nlp/word-embeddings","we have used a subset of the January 2017 crawl dump. The dump contains more than 3.14 billion web pages and about 250 Terabytes of uncompressed content. [...] We used WET files as we were only interested in plain text for building the distributed word representation models. Due to the size of the dump, which requires massive processing power and time for handling, we only used 30\% of the data contained in it. As this subset comprises about one billion web pages (written in multiple language), we believed that it was large enough to provide sufficient Arabic Web pages from which we can build a representative word embeddings model. Here it is important to note that the Common Crawl project does not provide any technique for identifying or selecting the language of web pages to download. So, we had to download data first, and then discard pages that were not written in Arabic. The Arabic detection phase was performed using some regex commands and some NLP techniques to distinguish Arabic from other languages. After the completion of this phase we succeeded in obtaining 4,379,697 Arabic web pages which were then segmented into more than 180,000,000 paragraphs/documents for building our models.","","","","" "Tommy Dean, Ali Pasha, Brian Clarke, Casey J. Butenhoff – Virginia Polytechnic Institute and State University, USA; Eastman Chemical Company; USA","Common Crawl Mining","http://hdl.handle.net/10919/77629","papers","20170101Z00:00:00","","","Virginia Polytechnic Institute and State University, USA; Eastman Chemical Company; USA","information retrieval, market research, business intelligence","The main goal behind the Common Crawl Mining system is to improve Eastman Chemical Company’s ability to use timely knowledge of public concerns to inform key business decisions. It provides information to Eastman Chemical Company that is valuable for consumer chemical product marketing and strategy development. Eastman desired a system that provides insight into the current chemical landscape. Information about trends and sentiment towards chemicals over time is beneficial to their marketing and strategy departments. They wanted to be able to drill down to a particular time period and look at what people were writing about certain keywords. [...] The final Common Crawl Mining system is a search engine implemented using Elasticsearch. Relevant records are identified by first analyzing Common Crawl for Web Archive (WARC) files that have a high frequency of records from interesting domains.","","","","" "Yuheng Du, Alexander Herzog, Andre Luckow, Ramu Nerella, Christopher Gropp, Amy Apon – Clemson University, USA","Representativeness of latent dirichlet allocation topics estimated from data samples with application to common crawl","http://alexherzog.net/files/IEEE_BigData_2017_Representativeness_of_LDA.pdf","papers","20170101Z00:00:00","","","Clemson University, USA","nlp/topic-modeling, nlp/corpus-representativeness","Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to date. 
Common Crawl has been widely used for text mining purposes. Using data extracted from Common Crawl has several advantages over a direct crawl of web data, among which is removing the likelihood of a user’s home IP address becoming blacklisted for accessing a given web site too frequently. However, Common Crawl is a data sample, and so questions arise about the quality of Common Crawl as a representative sample of the original data. We perform systematic tests on the similarity of topics estimated from Common Crawl compared to topics estimated from the full data of online forums. Our target is online discussions from a user forum for automotive enthusiasts, but our research strategy can be applied to other domains and samples to evaluate the representativeness of topic models. We show that topic proportions estimated from Common Crawl are not significantly different than those estimated on the full data. We also show that topics are similar in terms of their word compositions, and not worse than topic similarity estimated under true random sampling, which we simulate through a series of experiments. Our research will be of interest to analysts who wish to use Common Crawl to study topics of interest in user forum data, and analysts applying topic models to other data samples.","","","","" "Shalini Ghosh, Phillip Porras, Vinod Yegneswaran, Ken Nitz, Ariyam Das – CSL, SRI International, Menlo Park","ATOL: A Framework for Automated Analysis and Categorization of the Darkweb Ecosystem","https://www.aaai.org/ocs/index.php/WS/AAAIW17/paper/download/15205/14661","papers","20170101Z00:00:00","","","CSL, SRI International, Menlo Park","web-science, information retrieval, nlp/text-classification",".onion references from [...] and an open repository of (non-onion) Web crawling data, called Common Crawl (Common Crawl Foundation 2016).","","","","" "Filip Ginter, Jan Hajič, Juhani Luotolahti, Milan Straka, Daniel Zeman – Charles University, Czech Republic; University of Turku, Finland","CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings","http://hdl.handle.net/11234/1-1989","papers","20170101Z00:00:00","","","Charles University, Czech Republic; University of Turku, Finland","nlp/corpus-construction, nlp/word-embeddings, nlp/syntactic-annotations, nlp/dependency-parsing","Automatic segmentation, tokenization and morphological and syntactic annotations of raw texts in 45 languages, generated by UDPipe (http://ufal.mff.cuni.cz/udpipe), together with word embeddings of dimension 100 computed from lowercased texts by word2vec (https://code.google.com/archive/p/word2vec/). [...] Note that the CC BY-SA-NC 4.0 license applies to the automatically generated annotations and word embeddings, not to the underlying data, which may have different license and impose additional restrictions.","","conll-2017-shared-task","","" "Jakub Kúdela, Irena Holubová, Ondřej Bojar – Charles University, Czech Republic","Extracting Parallel Paragraphs from Common Crawl","https://ufal.mff.cuni.cz/pbml/107/art-kudela-holubova-bojar.pdf","papers","20170101Z00:00:00","","Most of the current methods for mining parallel texts from the web assume that web pages of web sites share same structure across languages. We believe that there still exists a non-negligible amount of parallel data spread across sources not satisfying this assumption. 
We propose an approach based on a combination of bivec (a bilingual extension of word2vec) and locality-sensitive hashing which allows us to efficiently identify pairs of parallel segments located anywhere on pages of a given web domain, regardless their structure. We validate our method on realigning segments from a large parallel corpus. Another experiment with real-world data provided by Common Crawl Foundation confirms that our solution scales to hundreds of terabytes large set of web-crawled data.","Charles University, Czech Republic","nlp/machine-translation, nlp/corpus-construction","","","","","" "Amir Mehmood, Hafiz Muhammad Shafiq, Abdul Waheed – UET, Lahore, Pakistan","Understanding Regional Context of World Wide Web using Common Crawl Corpus","https://www.researchgate.net/publication/321489200_Understanding_Regional_Context_of_World_Wide_Web_using_Common_Crawl_Corpus","papers","20170101Z00:00:00","","","UET, Lahore, Pakistan","web-science, webometrics","","CC-MAIN-2016-50","","","" "Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, Chris Biemann – University of Hamburg, Germany; University of Mannheim, Germany","Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl","http://arxiv.org/abs/1710.01779","papers","20170101Z00:00:00","","","University of Hamburg, Germany; University of Mannheim, Germany","nlp/dependency-parsing, nlp/corpus-construction","","CC-MAIN-2016-07","depcc","","" "Ajinkya Kale, Thrivikrama Taula, Sanjika Hewavitharana, Amit Srivastava – eBay Inc.","Towards semantic query segmentation","https://arxiv.org/abs/1707.07835","papers","20170101Z00:00:00","","","eBay Inc.","ir/query-segmentation, nlp/word-embeddings, patent","","","","","GloVe-word-embeddings" "Kjetil Bugge Kristoffersen – University of Oslo, Norway","Common crawled web corpora: constructing corpora from large amounts of web data","http://urn.nb.no/URN:NBN:no-60569","papers","20170101Z00:00:00","","Efforts to use web data as corpora seek to provide solutions to problems traditional corpora suffer from, by taking advantage of the web's huge size and diverse type of content. This thesis will discuss the several sub-tasks that make up the web corpus construction process, like HTML markup removal, language identification, boilerplate removal, duplication detection, etc. Additionally, by using data provided by the Common Crawl Foundation, I develop a new very large English corpus with more than 135 billion tokens. 
Finally, I evaluate the corpus by training word embeddings and show that the trained model largely outperforms models trained on other corpora in a word analogy and word similarity task.","University of Oslo, Norway","nlp/corpus-construction, nlp/web-as-corpus","","","","","" "David Stuart – University of Wolverhampton, Wolverhampton, UK","Open bibliometrics and undiscovered public knowledge","https://doi.org/10.1108/OIR-07-2017-0209","papers","20170101Z00:00:00","","","University of Wolverhampton, Wolverhampton, UK","web-science/webometrics","Whether altmetrics is really any more open than traditional citation analysis is a matter of debate, although services such as Common Crawl (http://commoncrawl.org), an open repository of web crawl data, provides the opportunity for more open webometrics, [...]","","","","" "Mostafa Abdou, Artur Kulmizev, Vinit Ravishankar, Lasha Abzianidze, Johan Bos – University of Groningen, The Netherlands; University of Copenhagen, Denmark; University of Oslo, Norway;","What can we learn from Semantic Tagging?","https://arxiv.org/abs/1808.09716","papers","20180101Z00:00:00","","","University of Groningen, The Netherlands; University of Copenhagen, Denmark; University of Oslo, Norway;","nlp/semantics, nlp/word-embeddings, nlp/semantic-tagging","","","","GloVe-word-embeddings","" "Ameeta Agrawal, Aijun An, Manos Papagelis – York University, Toronto, Canada","Learning emotion-enriched word representations","https://www.aclweb.org/anthology/C18-1081","papers","20180101Z00:00:00","","Most word representation learning methods are based on the distributional hypothesis in linguistics, according to which words that are used and occur in the same contexts tend to possess similar meanings. As a consequence, emotionally dissimilar words, such as “happy” and “sad” occurring in similar contexts would purport more similar meaning than emotionally similar words, such as “happy” and “joy”. This complication leads to rather undesirable outcome in predictive tasks that relate to affect (emotional state), such as emotion classification and emotion similarity. In order to address this limitation, we propose a novel method of obtaining emotion-enriched word representations, which projects emotionally similar words into neighboring spaces and emotionally dissimilar ones far apart. The proposed approach leverages distant supervision to automatically obtain a large training dataset of text documents and two recurrent neural network architectures for learning the emotion-enriched representations. 
Through extensive evaluation on two tasks, including emotion classification and emotion similarity, we demonstrate that the proposed representations outperform several competitive general-purpose and affective word representations.","York University, Toronto, Canada","nlp/word-embeddings, nlp/emotion-detection, nlp/sentiment-analysis","","","","GloVe-word-embeddings","" "Manar Alohaly, Hassan Takabi, Eduardo Blanco – University of North Texas, USA","A Deep Learning Approach for Extracting Attributes of ABAC Policies","http://doi.acm.org/10.1145/3205977.3205984","papers","20180101Z00:00:00","access control policy, attribute-based access control, deep learning, natural language processing, policy authoring, relation extraction","","University of North Texas, USA","nlp/machine-translation, computer-security/access-restrictions","","","","","" "Milad Alshomary, Michael Völske, Tristan Licht, Henning Wachsmuth, Benno Stein, Matthias Hagen, Martin Potthast – Paderborn University, Germany; Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany","Wikipedia text reuse: within and without","https://link.springer.com/chapter/10.1007/978-3-030-15712-8_49","papers","20180101Z00:00:00","","We study text reuse related to Wikipedia at scale by compiling the first corpus of text reuse cases within Wikipedia as well as without (i.e., reuse of Wikipedia text in a sample of the Common Crawl). To discover reuse beyond verbatim copy and paste, we employ state-of-the-art text reuse detection technology, scaling it for the first time to process the entire Wikipedia as part of a distributed retrieval pipeline. We further report on a pilot analysis of the 100 million reuse cases inside, and the 1.6 million reuse cases outside Wikipedia that we discovered. Text reuse inside Wikipedia gives rise to new tasks such as article template induction, fixing quality flaws, or complementing Wikipedia’s ontology. Text reuse outside Wikipedia yields a tangible metric for the emerging field of quantifying Wikipedia’s influence on the web. 
To foster future research into these tasks, and for reproducibility’s sake, the Wikipedia text reuse corpus and the retrieval pipeline are made freely available.","Paderborn University, Germany; Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany","web-mining, ir/duplicate-detection","To foster research into Wikipedia text reuse, we compiled the first Wikipedia text reuse corpus, obtained from comparing the entire Wikipedia to itself as well as to a 10\%-sample of the Common Crawl.","","","","" "Andrei Amatuni, Estelle He, Elika Bergelson – Duke University","Preserved Structure Across Vector Space Representations","https://arxiv.org/abs/1802.00840","papers","20180101Z00:00:00","","","Duke University","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Khaled Ammar, Frank McSherry, Semih Salihoglu, Manas Joglekar – University of Waterloo, Canada; ETH Zürich, Switzerland; Google, Inc.","Distributed Evaluation of Subgraph Queries Using Worst-case Optimal Low-Memory Dataflows","https://arxiv.org/pdf/1802.03760.pdf","papers","20180101Z00:00:00","","","University of Waterloo, Canada; ETH Zürich, Switzerland; Google, Inc.","graph-processing","","","","WDC-hyperlinkgraph","" "Khaled Ammar, Frank McSherry, Semih Salihoglu, Manas Joglekar – University of Waterloo, Canada; ETH Zürich, Switzerland; Google, Inc.","Distributed evaluation of subgraph queries using worst-case optimal low-memory dataflows","https://dl.acm.org/citation.cfm?id=3199520","papers","20180101Z00:00:00","","","University of Waterloo, Canada; ETH Zürich, Switzerland; Google, Inc.","graph-processing","","","","WDC-hyperlinkgraph","" "Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, Geoffrey E. Hinton – Google; Google Brain; Google DeepMind","Large scale distributed neural network training through online distillation","https://arxiv.org/abs/1804.03235","papers","20180101Z00:00:00","","","Google; Google Brain; Google DeepMind","nlp/neural-networks","","CC-MAIN-2017-26","","","" "Sajjad Arshad, Seyed Ali Mirheidari, Tobias Lauinger, Bruno Crispo, Engin Kirda, William Robertson – Northeastern University, Boston, MA, USA; University of Trento, Trento, Italy","Large-Scale Analysis of Style Injection by Relative Path Overwrite","https://doi.org/10.1145/3178876.3186090","papers","20180101Z00:00:00","relative path overwrite, scriptless attack, style injection","","Northeastern University, Boston, MA, USA; University of Trento, Trento, Italy","web-science, computer-security/web-application-security","We extract pages using relative-path stylesheets from the Common Crawl dataset [9], automatically test if style directives can be injected using RPO, and determine whether they are interpreted by the browser. [...] For finding the initial seed set of candidate pages with relative-path stylesheets, we leverage the Common Crawl from August 2016, which contains more than 1.6 billion pages. By using an existing dataset, we can quickly identify candidate pages without creating any web crawl traffic.
We use a Java HTML parser to filter any pages containing only inline CSS or stylesheets referenced by absolute URLs, leaving us with over 203 million pages on nearly 6 million sites.","CC-MAIN-2016-36","","","" "Mikel Artetxe, Gorka Labaka, Eneko Agirre – University of the Basque Country, Spain","Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations","https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16935/16781","papers","20180101Z00:00:00","","","University of the Basque Country, Spain","nlp/semantics, nlp/word-embeddings, nlp/bilingual-word-embeddings","","","","","" "Mikel Artetxe, Gorka Labaka, Eneko Agirre – University of the Basque Country, Spain","A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings","https://arxiv.org/abs/1805.06297","papers","20180101Z00:00:00","","","University of the Basque Country, Spain","nlp/semantics, nlp/word-embeddings, nlp/bilingual-word-embeddings","","","","WMT-16-translation-task-common-crawl-corpus","" "Mikel Artetxe, Gorka Labaka, Iñigo Lopez-Gazpio, Eneko Agirre – University of the Basque Country, Spain","Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation","https://arxiv.org/abs/1809.02094","papers","20180101Z00:00:00","","","University of the Basque Country, Spain","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings, fastText-word-embeddings","" "Mikel Artetxe, Holger Schwenk – University of the Basque Country, Spain; Facebook AI Research","Margin-based parallel corpus mining with multilingual sentence embeddings","https://arxiv.org/abs/1811.01136","papers","20180101Z00:00:00","","","University of the Basque Country, Spain; Facebook AI Research","cc-cited-not-used, nlp/word-embeddings, nlp/sentence-embeddings, nlp/parallel-corpus","","","","","" "Parnia Bahar, Christopher Brix, Hermann Ney – RWTH Aachen University, Germany","Towards two-dimensional sequence to sequence model in neural machine translation","https://arxiv.org/abs/1810.03975","papers","20180101Z00:00:00","","","RWTH Aachen University, Germany","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Krisztian Balog – University of Stavanger, Norway","Entity-oriented search","https://link.springer.com/content/pdf/10.1007/978-3-319-93935-3.pdf","papers","20180101Z00:00:00","","","University of Stavanger, Norway","information-retrieval, nlp/named-entity-recognition, linked data","Common Crawl⁵ is a nonprofit organization that regularly crawls the Web and makes the data publicly available. The datasets are hosted on Amazon S3 as part of the Amazon Public Datasets program.⁶ As of May 2017, the crawl contains 2.96 billion web pages and over 250 TB of uncompressed content (in WARC format).
The Web Data Commons project⁷ extracts structured data from the Common Crawl and makes those publicly available (e.g., the Hyperlink Graph Dataset and the Web Table Corpus).","CC-MAIN-2017-22","","","" "Luciano Barbosa, Valter Crescenzi, Xin Luna Dong, Paolo Merialdo, Federico Piai, Disheng Qiu, Yanyan Shen, Divesh Srivastava – Universidade Federal de Pernambuco, Brazil; Roma Tre University, Italy; Amazon; Wanderio; Shanghai Jiao Tong University; AT&T Labs – Research","Big Data Integration for Product Specifications.","http://sites.computer.org/debull/A18june/A18JUN-CD.pdf#page=73","papers","20180101Z00:00:00","","","Universidade Federal de Pernambuco, Brazil; Roma Tre University, Italy; Amazon; Wanderio; Shanghai Jiao Tong University; AT&T Labs – Research","ir/information-extraction, ir/data-integration","About 68\% of the sources discovered by our approach were not present in Common Crawl. Only 20\% of our sources contained fewer pages than the same sources in Common Crawl, and a very small fraction of the pages in these sources were product pages: on a sample set of 12 websites where Common Crawl presented more pages than in our dataset, we evaluated that only 0.8\% of the pages were product pages.","","","","" "Luciano Barbosa, Valter Crescenzi, Xin Luna Dong, Paolo Merialdo, Federico Piai, Disheng Qiu, Yanyan Shen, Divesh Srivastava – Universidade Federal de Pernambuco, Brazil; Roma Tre University, Italy; Amazon; Wanderio; Shanghai Jiao Tong University; AT&T Labs – Research","Lessons Learned and Research Agenda for Big Data Integration of Product Specifications (Discussion Paper)","http://ceur-ws.org/Vol-2161/paper29.pdf","papers","20180101Z00:00:00","","","Universidade Federal de Pernambuco, Brazil; Roma Tre University, Italy; Amazon; Wanderio; Shanghai Jiao Tong University; AT&T Labs – Research","ir/information-extraction, ir/data-integration","Building a Benchmark Product Dataset – We compared the contents of our dataset with pages in Common Crawl, an open repository of web crawl data. About 68\% of the sources discovered by our approach were not present in Common Crawl.
Only 20\% of our sources contained fewer pages than the same sources in Common Crawl, and a very small fraction of the pages in these sources were product pages: on a sample set of 12 websites where Common Crawl presented more pages than in our dataset, we evaluated that only 0.8\% of the pages were product pages.","","","","" "Michail Batikas, Jörg Claussen, Christian Peukert – LMU Munich, Germany; UCP – Católica Lisbon School of Business and Economics, Lisboa, Portugal","Follow The Money: Online Piracy and Self-Regulation in the Advertising Industry","http://www.cesifo-group.de/DocDL/cesifo1_wp6852.pdf","papers","20180101Z00:00:00","","","LMU Munich, Germany; UCP – Católica Lisbon School of Business and Economics, Lisboa, Portugal","web-science","We obtain archived versions of the HTML source code of all URLs for each domain in our gross sample from Common Crawl, a project that has crawled billions of webpages periodically since summer 2013.","","","","" "Leilani Battle, Peitong Duan, Zachery Miranda, Dana Mukusheva, Remco Chang, Michael Stonebraker – University of Washington, Seattle, WA, USA; Massachusetts Institute of Technology, Cambridge, MA, USA; Tufts University, Medford, MA, USA","Beagle: Automated Extraction and Interpretation of Visualizations from the Web","https://dl.acm.org/citation.cfm?id=3174168","papers","20180101Z00:00:00","","``How common is interactive visualization on the web?'' ``What is the most popular visualization design?'' ``How prevalent are pie charts really?'' These questions intimate the role of interactive visualization in the real (online) world. In this paper, we present our approach (and findings) to answering these questions. First, we introduce Beagle, which mines the web for SVG-based visualizations and automatically classifies them by type (i.e., bar, pie, etc.). With Beagle, we extract over 41,000 visualizations across five different tools and repositories, and classify them with 85\% accuracy, across 24 visualization types. Given this visualization collection, we study usage across tools. We find that most visualizations fall under four types: bar charts, line charts, scatter charts, and geographic maps. Though controversial, pie charts are relatively rare for the visualization tools that were studied. Our findings also suggest that the total visualization types supported by a given tool could factor into its ease of use. However this effect appears to be mitigated by providing a variety of diverse expert visualization examples to users.","University of Washington, Seattle, WA, USA; Massachusetts Institute of Technology, Cambridge, MA, USA; Tufts University, Medford, MA, USA","web-science, web-crawling","As found with other web crawling projects, such as the Common Crawl¹, our web crawls represent a specific point in time for the websites [...]","","","","" "Luigi Bellomarini, Ruslan R Fayzrakhmanov, Georg Gottlob, Andrey Kravchenko, Eleonora Laurenza, Yavor Nenov, Stephane Reissfelder, Emanuel Sallinger, Evgeny Sherkhonov, Lianlong Wu – University of Oxford, United Kingdom; Banca d’Italia, Italy; TU Wien, Austria","Data Science with Vadalog: Bridging Machine Learning and Reasoning","https://arxiv.org/abs/1807.08712","papers","20180101Z00:00:00","","","University of Oxford, United Kingdom; Banca d’Italia, Italy; TU Wien, Austria","ai/semantic-reasoning, ai/machine-learning","Enterprises increasingly depend on intelligent information systems that operationalise corporate knowledge as a unified source across system boundaries. [...] 
To maintain their competitive edge, companies need to incorporate multiple heterogeneous sources of information, including [...] external streams of unstructured data (e.g., news and social media feeds, and Common Crawl¹), [...]","","","","" "Luisa Bentivogli, Mauro Cettolo, Marcello Federico, Federmann Christian – FBK, Trento, Italy; Amazon AI, East Palo Alto, CA, USA, Microsoft Cloud+AI, Redmond, WA, USA","Machine Translation Human Evaluation: an investigation of evaluation based on Post-Editing and its relation with Direct Assessment","https://workshop2018.iwslt.org/downloads/Proceedings_IWSLT_2018.pdf#page=77","papers","20180101Z00:00:00","","","FBK, Trento, Italy; Amazon AI, East Palo Alto, CA, USA, Microsoft Cloud+AI, Redmond, WA, USA","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Janek Bevendorff, Benno Stein, Matthias Hagen, Martin Potthast – Bauhaus-Universität Weimar, Germany; Leipzig University, Germany","Elastic ChatNoir: Search Engine for the ClueWeb and the Common Crawl","https://doi.org/10.1007/978-3-319-76941-7_83","papers","20180101Z00:00:00","","","Bauhaus-Universität Weimar, Germany; Leipzig University, Germany","information-retrieval/search-engine","","CC-MAIN-2015-11","","","" "Paolo Boldi, Andrea Marino, Massimo Santini, Sebastiano Vigna – Università degli Studi di Milano, Italy","BUbiNG: Massive crawling for the masses","https://dl.acm.org/citation.cfm?id=3160017","papers","20180101Z00:00:00","","","Università degli Studi di Milano, Italy","web-crawling, web-science/hyperlinkgraph","","","","","WDC-hyperlinkgraph" "Fabienne Braune, Alex Fraser, Barry Haddow – University of Edinburgh","D1. 2: Report on Improving Translation with Monolingual Data","http://www.himl.eu/files/D1.2_Using_Non_Parallel.pdf","papers","20180101Z00:00:00","","","University of Edinburgh","nlp/machine-translation","","","","","" "Tomáš Brychcín, Tomáš Hercig, Josef Steinberger, Michal Konkol – University of West Bohemia, Czech Republic","UWB at SemEval-2018 Task 10: Capturing Discriminative Attributes from Word Distributions","http://www.aclweb.org/anthology/S18-1153","papers","20180101Z00:00:00","","","University of West Bohemia, Czech Republic","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Michael Cafarella, Alon Halevy, Hongrae Lee, Jayant Madhavan, Cong Yu, Daisy Zhe Wang, Eugene Wu – Google Inc.; University of Michigan, USA; Megagon Labs; University of Florida, USA; Columbia University, USA","Ten years of webtables","https://dl.acm.org/citation.cfm?id=3275614","papers","20180101Z00:00:00","","","Google Inc.; University of Michigan, USA; Megagon Labs; University of Florida, USA; Columbia University, USA","semantic web, web tables, web-mining","Several researchers produced web tables from the public Common Crawl [1, 24, 15], thereby making them available to a broad audience outside the large Web companies.","","","","WDCWebTables, DresdenWebTableCorpus" "Casey Casalnuovo, Kenji Sagae, Prem Devanbu – University of California, Davis, USA","Studying the Difference Between Natural and Programming Language Corpora","https://link.springer.com/article/10.1007/s10664-018-9669-7","papers","20180101Z00:00:00","","","University of California, Davis, USA","nlp/corpus-construction, nlp/text-corpora, programming-languages, nlp/syntax","The Germanand Spanish corpora were selected from a sample of files from the unlabeled datasets from the ConLL 2017 Shared Task (Ginter et al, 2017), which consist of web text obtained from CommonCrawl.⁸ Like 
the 1 billion token English corpus, we selected a random subsample to make these corpora size comparable with our other corpora. In this sample, we excluded files from the Wikipedia translations, as we observed Wikipedia formatting mixed in with some of the files.","","","conll-2017-shared-task","" "Xinghan Chen, Mingxing Zhang, Zheng Wang, Lin Zuo, Bo Li, Yang Yang – University of Electronic Science and Technology of China (UESTC), Chengdu, PR China","Leveraging Unpaired Out-of-Domain Data for Image Captioning","https://www.sciencedirect.com/science/article/abs/pii/S0167865518309358","papers","20180101Z00:00:00","","","University of Electronic Science and Technology of China (UESTC), Chengdu, PR China","nlp/text-generation, ai/image-classification, nlp/image-captioning, ai/deep-learning","","","","","" "Zewen Chi, Heyan Huang, Jiangui Chen, Hao Wu, Ran Wei – Beijing Institute of Technology, China","Zewen at SemEval-2018 Task 1: An Ensemble Model for Affect Prediction in Tweets","http://www.aclweb.org/anthology/S18-1046","papers","20180101Z00:00:00","","","Beijing Institute of Technology, China","nlp, nlp/sentiment-analysis, nlp/emotion-detection, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Mara Chinea-Rios, Alvaro Peris, Francisco Casacuberta – Universitat d'Alacant, Spain","Are Automatic Metrics Robust and Reliable in Specific Machine Translation Tasks?","http://rua.ua.es/dspace/handle/10045/76022","papers","20180101Z00:00:00","","","Universitat d'Alacant, Spain","nlp/machine-translation","In our setup, we trained a PB-SMT and a NMT system on the same data, from a general corpus extracted from websites (Common Crawl).","","","","" "Shamil Chollampatt, Hwee Tou Ng – NUS Graduate School for Integrative Sciences and Engineering; Department of Computer Science, National University of Singapore","A multilayer convolutional encoder-decoder neural network for grammatical error correction","https://arxiv.org/abs/1801.08831","papers","20180101Z00:00:00","","","NUS Graduate School for Integrative Sciences and Engineering; Department of Computer Science, National University of Singapore","nlp/grammatical-error-correction, nlp/word-embeddings, nlp/language-model","We also make use of the larger English corpora from Wikipedia (1.78B words) for pre-training the word embeddings, and a subset of the Common Crawl corpus (94B words) for training the language model for rescoring.","","","","" "Kenneth Clarkson, Anna Lisa Gentile, Daniel Gruhl, Petar Ristoski, Joseph Terdiman, Steve Welch – IBM Research Almaden, San Jose, USA","User-Centric Ontology Population","https://link.springer.com/chapter/10.1007/978-3-319-93417-4_8","papers","20180101Z00:00:00","","","IBM Research Almaden, San Jose, USA","semantic web, cc-cited-not-used, ontology extraction","","","","","" "Trevor Cohen, Dominic Widdows – University of Washington, Seattle, USA; Grab, Inc., Seattle, WA, USA","Bringing Order to Neural Word Embeddings with Embeddings Augmented by Random Permutations (EARP)","http://www.aclweb.org/anthology/K18-1045","papers","20180101Z00:00:00","","","University of Washington, Seattle, USA; Grab, Inc., Seattle, WA, USA","nlp/word-embeddings, cc-cited-not-used","","","","","" "Alexis Conneau, Douwe Kiela – Facebook Artificial Intelligence Research","SentEval: An evaluation toolkit for universal sentence representations","https://arxiv.org/abs/1803.05449","papers","20180101Z00:00:00","","","Facebook Artificial Intelligence Research","nlp/word-embeddings, nlp/sentence-embeddings, 
nlp/evaluation","","","","GloVe-word-embeddings, fastText-word-embeddings","" "Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R Bowman, Holger Schwenk, Veselin Stoyanov – Facebook AI Research, USA; New York University, USA","XNLI: Evaluating Cross-lingual Sentence Representations","https://arxiv.org/abs/1809.05053","papers","20180101Z00:00:00","","","Facebook AI Research, USA; New York University, USA","nlp/word-embeddings, nlp/sentence-embeddings","","","","fasttext-word-embeddings","" "Michael Conover, Matthew Hayes, Scott Blackburn, Pete Skomoroch, Sam Shah – Workday, Inc., San Francisco, CA, USA","Pangloss: Fast Entity Linking in Noisy Text Environments","https://dl.acm.org/citation.cfm?id=3219899","papers","20180101Z00:00:00","","","Workday, Inc., San Francisco, CA, USA","ir/information-extraction","The Common Crawl datasets represents a sample of web crawl data containing raw web page data, metadata and text extracts overseen by a 501(c)(3) nonprofit of the same name. Facilitating ease of access for industrial practitioners, the dataset is hosted for free on Amazon Web Services’ Public Data Set repository in addition to academic hosts the world over. As part of a batch Hadoop job run on a monthly basis we filter the Common Crawl data (∼70TB) down to records which contain at least one hyperlink that points to English Wikipedia. This corpus has proven particularly valuable as a source of signal for associating tokens with knowledge base entries in the context of domain-specific, messy natural language.","","","","" "Andreiwid Sheffer Correa, Pär-Ola Zander, Flavio Soares Correa da Silva – University of Sao Paulo, Sao Paulo, Brazil; Aalborg University, Aalborg, Denmark","Investigating open data portals automatically: a methodology and some illustrations","https://dl.acm.org/citation.cfm?id=3209292","papers","20180101Z00:00:00","","","University of Sao Paulo, Sao Paulo, Brazil; Aalborg University, Aalborg, Denmark","open data, information retrieval","","","","","" "J Shane Culpepper, Fernando Diaz, Mark D. Smucker – ACM","Research Frontiers in Information Retrieval: Report from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018)","http://doi.acm.org/10.1145/3274784.3274788","papers","20180101Z00:00:00","","","ACM","cc-cited-not-used, information-retrieval","","","","","" "Alexander Czech – TU Wien, Austria","An Approach to Geotag a Web Sized Corpus of Documents with Addresses in Randstad, Netherlands","https://doi.org/10.3929/ethz-b-000225615","papers","20180101Z00:00:00","","","TU Wien, Austria","ir/geotagging","Common Crawl is a non-profit organization that provides raw web crawling data on a monthly basis. Their archives contain over 3.16 billion URLs with over 260 TiB of uncompressed content.","","","","" "Berkan Demirel, Ramazan Gokberk Cinbis, Nazli Ikizler-Cinbis – HAVELSAN Inc. Ankara, Turkey; Middle East Technical University Ankara, Turkey; Hacettepe University Ankara, Turkey","Zero-Shot Object Detection by Hybrid Region Embedding","https://arxiv.org/abs/1805.06157","papers","20180101Z00:00:00","","","HAVELSAN Inc. 
Ankara, Turkey; Middle East Technical University Ankara, Turkey; Hacettepe University Ankara, Turkey","ai/computer-vision, ai/pattern-recognition, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Pavel Denisov, Ngoc Thang Vu, Marc Ferras Font – University of Stuttgart, Germany","Unsupervised Domain Adaptation by Adversarial Learning for Robust Speech Recognition","https://arxiv.org/abs/1807.11284","papers","20180101Z00:00:00","","","University of Stuttgart, Germany","nlp, speech-recognition","..., 197 millions words of Italian Deduplicated CommonCrawl Text are used to build Italian language model.","","","","" "Sunipa Dev, Safia Hassan, Jeff M Phillips – University of Utah","Absolute Orientation for Word Embedding Alignment","https://arxiv.org/abs/1806.01330","papers","20180101Z00:00:00","","","University of Utah","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Sergey Edunov, Myle Ott, Michael Auli, David Grangier – Facebook AI Research, USA; Google Brain, Mountain View, CA, USA","Understanding Back-Translation at Scale","https://arxiv.org/abs/1808.09381","papers","20180101Z00:00:00","","","Facebook AI Research, USA; Google Brain, Mountain View, CA, USA","nlp/machine-translation","","","","","" "Julia Efremova, Ian Endres, Isaac Vidas, Ofer Melnik – HERE Technologies, Amsterdam, The Netherlands","A Geo-Tagging Framework for Address Extraction from Web Pages","https://link.springer.com/chapter/10.1007/978-3-319-95786-9_22","papers","20180101Z00:00:00","","","HERE Technologies, Amsterdam, The Netherlands","semantic-web/microformats","Common Crawl is a public corpus, mostly stored on Amazon Web Services³. A subset of the CommonCrawl dataset has schema information in the microdata format","","","","" "Samer El Zant, Katia Jaffrès-Runser, Klaus M. Frahm, Dima L. Shepelyansky – Université de Toulouse, France","Interactions and influence of world painters from the reduced Google matrix of Wikipedia networks","https://ieeexplore.ieee.org/abstract/document/8449078","papers","20180101Z00:00:00","","This paper concentrates on extracting painting art history knowledge from the network structure of Wikipedia. Therefore, we construct theoretical networks of webpages representing the hyper-linked structure of articles of seven Wikipedia language editions. These seven networks are analyzed to extract the most influential painters in each edition using Google matrix theory. Importance of webpages of over 3000 painters is measured using the PageRank algorithm. The most influential painters are enlisted and their ties are studied with the reduced Google matrix analysis. The reduced Google matrix is a powerful method that captures both direct and hidden interactions between a subset of selected nodes taking into account the indirect links between these nodes via the remaining part of large global network. This method originates from the scattering theory of nuclear and mesoscopic physics and field of quantum chaos. In this paper, we show that it is possible to extract from the components of the reduced Google matrix meaningful information on the ties between these painters. For instance, our analysis groups together painters that belong to the same painting movement and shows meaningful ties between painters of different movements. We also determine the influence of painters on world countries using link sensitivity between Wikipedia articles of painters and countries. 
The reduced Google matrix approach allows to obtain a balanced view of various cultural opinions of Wikipedia language editions. The world countries with the largest number of top painters of selected seven Wikipedia editions are found to be Italy, France, and Russia. We argue that this approach gives meaningful information about art and that it could be a part of extensive network analysis on human knowledge and cultures.","Université de Toulouse, France","web-science/hyperlinkgraph, graph-processing, cc-cited-not-used","","","","","" "Cristina Espana-Bonet, Juliane Stiller, Sophie Henning – Universität des Saarlandes, Germany; Humboldt-Universität zu Berlin, Germany","M1. 2--Corpora for the Machine Translation Engines","https://www.clubs-project.eu/assets/publications/project/M1.2_MTcorpora_v4.0.pdf","papers","20180101Z00:00:00","","","Universität des Saarlandes, Germany; Humboldt-Universität zu Berlin, Germany","nlp/machine-translation, nlp/corpora","","","","","WMT-13-translation-task-common-crawl-corpus" "Diego Esteves, Aniketh Janardhan Reddy, Piyush Chawla, Jens Lehmann – University of Bonn, Germany; University of Ohio, USA; Carnegie Mellon University, Pittsburgh, USA;","Belittling the Source: Trustworthiness Indicators to Obfuscate Fake News on the Web","https://arxiv.org/abs/1809.00494","papers","20180101Z00:00:00","","","University of Bonn, Germany; University of Ohio, USA; Carnegie Mellon University, Pittsburgh, USA;","nlp, text classification, content credibility, information retrieval","PageRankCC: PageRank information computed through the CommonCrawl Corpus","","","","" "Stefano Faralli, Els Lefever, Simone Paolo Ponzetto – University of Mannheim, Germany; Ghent University, Belgium","MIsA: Multilingual IsA Extraction from Corpora","https://biblio.ugent.be/publication/8562721","papers","20180101Z00:00:00","","","University of Mannheim, Germany; Ghent University, Belgium","nlp/semantics, data-mining, hypernymy","","","","","WDC-WebIsADb" "Ruslan R. 
Fayzrakhmanov, Emanuel Sallinger, Ben Spencer, Tim Furche, Georg Gottlob – University of Oxford, Oxford, United Kingdom","Browserless web data extraction: challenges and opportunities","https://dl.acm.org/citation.cfm?id=3186008","papers","20180101Z00:00:00","","","University of Oxford, Oxford, United Kingdom","information retrieval, web-crawling, web-scraping, web-mining","The random sites were chosen by randomly sampling URLs from the Common Crawl [10] search index dataset, which includes around 3 billion web pages.","","","","" "Agostino Funel – ENEA, Italy","Analysis of the Web Graph Aggregated by Host and Pay-Level Domain","https://arxiv.org/abs/1802.05435","papers","20180101Z00:00:00","","","ENEA, Italy","web-science/hyperlinkgraph","","hyperlinkgraph/cc-main-2017-aug-sep-oct/hostgraph, hyperlinkgraph/cc-main-2017-aug-sep-oct/domaingraph","","","" "Andres Garcia, Jose Manuel Gomez-Perez – expertsystem.com, Madrid, Spain","Not just about size-A Study on the Role of Distributed Word Representations in the Analysis of Scientific Publications","https://arxiv.org/abs/1804.01772","papers","20180101Z00:00:00","","","expertsystem.com, Madrid, Spain","nlp/word-embeddings","","","","fastText-word-embeddings, GloVe-word-embeddings","" "Andres Garcia, Jose Manuel Gomez-Perez – expertsystem.com, Madrid, Spain","Not just about size-A Study on the Role of Distributed Word Representations in the Analysis of Scientific Publications","http://ceur-ws.org/Vol-2106/paper3.pdf","papers","20180101Z00:00:00","","","expertsystem.com, Madrid, Spain","nlp/word-embeddings","","","","fastText-word-embeddings, GloVe-word-embeddings","" "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, James Zou – Stanford University, USA; Chan Zuckerberg Biohub, San Francisco, CA, USA","Word embeddings quantify 100 years of gender and ethnic stereotypes","https://www.pnas.org/content/115/16/E3635.short","papers","20180101Z00:00:00","","","Stanford University, USA; Chan Zuckerberg Biohub, San Francisco, CA, USA","nlp/semantics, nlp/word-embeddings, ai/ethics-of-machine-learning, ai/machine-learning","","","","GloVe-word-embeddings","" "Majid Ghasemi-Gol, Pedro Szekely – University of Southern California; Information Science Institute","TabVec: Table Vectors for Classification of Web Tables","https://arxiv.org/abs/1802.06290","papers","20180101Z00:00:00","","","University of Southern California; Information Science Institute","web-tables, information-extraction","[...] we use a random sample of July 2015 Common Crawl (WCC) as a generic domain to compare our system with the state of the art systems","CC-MAIN-2015-32","","","WDCWebTables, DresdenWebTableCorpus" "Michael Glass, Alfio Gliozzo – IBM Research AI","Discovering Implicit Knowledge with Unary Relations","http://www.aclweb.org/anthology/P18-1147","papers","20180101Z00:00:00","","","IBM Research AI","ai/knowledge-base","","","","","" "Michael Glass, Alfio Gliozzo – Knowledge Induction and Reasoning Group, IBM Research AI, New York, USA","A Dataset for Web-Scale Knowledge Base Population","https://link.springer.com/chapter/10.1007/978-3-319-93417-4_17","papers","20180101Z00:00:00","","","Knowledge Induction and Reasoning Group, IBM Research AI, New York, USA","ai/semantic-reasoning, ai/knowledge-base","We introduce and release CC-DBP, a web-scale dataset for training and benchmarking KBP systems.
The dataset is based on Common Crawl as the corpus and DBpedia as the target knowledge base [...]","CC-MAIN-2017-26","CC-DBP","","" "Michael Glass, Alfio Gliozzo, Oktie Hassanzadeh, Nandana Mihindukulasooriya, Gaetano Rossiello – IBM Research AI, New York, USA; Universidad Politcnica de Madrid, Spain; University of Bari, Italy","Inducing implicit relations from text using distantly supervised deep nets","https://link.springer.com/chapter/10.1007/978-3-030-00671-6_3","papers","20180101Z00:00:00","","","IBM Research AI, New York, USA; Universidad Politcnica de Madrid, Spain; University of Bari, Italy","ai/knowledge-base, ai/deep-learning, semantic web","","","","CC-DBP","" "Pranav Goel, Yoichi Matsuyama, Michael Madaio, Justine Cassell – Indian Institute of Technology (BHU), India; Carnegie Mellon University","“I think it might help if we multiply, and not add”: Detecting Indirectness in Conversation","http://articulab.hcii.cs.cmu.edu/wordpress/wp-content/uploads/2018/04/Goel-IWSDS2018_camera-ready_13Mar.pdf","papers","20180101Z00:00:00","","","Indian Institute of Technology (BHU), India; Carnegie Mellon University","nlp/dialogue-systems, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Viktor Golem, Mladen Karan, Jan Šnajder – University of Zagreb, Croatia","Combining Shallow and Deep Learning for Aggressive Text Detection","www.aclweb.org/anthology/W18-4422","papers","20180101Z00:00:00","","","University of Zagreb, Croatia","nlp/text-classification, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Paul Gooding, Melissa Terras, Linda Berube – University of East Anglia, United Kingdom; University of Edinburgh, United Kingdom","Legal Deposit Web Archives and the Digital Humanities: A Universe of Lost Opportunity?","http://eprints.gla.ac.uk/168229/","papers","20180101Z00:00:00","","","University of East Anglia, United Kingdom; University of Edinburgh, United Kingdom","web-archiving/legal-aspects","Restricted deposit library access requires researchers to look elsewhere for portable web data: by undertaking their own web crawls, or by utilising datasets from Common Crawl (http://commoncrawl.org/) and the Internet Archive (https://archive.org). Both organisations provide vital services to researchers, and both innovate in areas that would traditionally fall under the deposit libraries’ purview. They support their mission by exploring the boundaries of copyright, including exceptions for non-commercial text and data mining (Intellectual Property Office, 2014). This contrast between risk-enabled independent organisations and deposit libraries, described by interviewees as risk averse, challenges library/DH collaboration models such as BL Labs (http://labs.bl.uk) and Library of Congress Labs (https://labs.loc.gov).","","","","" "Rajendra Banjade, Nabin Maharjan, Dipesh Gautam, Frank Adrasik, Arthur C. 
Graesser, Vasile Rus – University of Memphis, USA","Pooling Word Vector Representations Across Models","https://www.springer.com/de/book/9783319771151","papers","20180101Z00:00:00","","","University of Memphis, USA","nlp/word-embeddings, nlp/semantics","","","","GloVe-word-embeddings","" "Gabriel Grand, Idan Asher Blank, Francisco Pereira, Evelina Fedorenko – Harvard University; Massachusetts Institute of Technology; Siemens Healthineers; Massachusetts General Hospital; Harvard Medical School","Semantic projection: recovering human knowledge of multiple, distinct object features from word embeddings","https://arxiv.org/abs/1802.01241","papers","20180101Z00:00:00","","","Harvard University; Massachusetts Institute of Technology; Siemens Healthineers; Massachusetts General Hospital; Harvard Medical School","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, Tomas Mikolov – Facebook AI Research; École polytechnique fédérale de Lausanne EPFL, Switzerland","Learning word vectors for 157 languages","https://www.aclweb.org/anthology/L18-1550","papers","20180101Z00:00:00","","Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exists, showing very strong performance compared to previous models.","Facebook AI Research; École polytechnique fédérale de Lausanne EPFL, Switzerland","nlp/word-embeddings","The common crawl is a non profit organization which crawls the web and makes the resulting data publicly available. This large scale corpus was previously used to estimate n-gram language models (Buck et al., 2014) or to learn English word vectors (Pennington et al., 2014). To the best of our knowledge, it was not used yet to learn word vectors for a large set of languages. The data is distributed either as raw HTML pages, or as WET files which contain the extracted text data, converted to UTF-8. We decided to use the extracted text data, as it is much smaller in size, and easier to process (no need to remove HTML).
We downloaded the May 2017 crawl, corresponding to roughly 24 terabytes of raw text data.","CC-MAIN-2017-22 (WET)","fastText-word-embeddings","","" "Roman Grundkiewicz, Marcin Junczys-Dowmunt – University of Edinburgh, United Kingdom; Microsoft","Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation","https://arxiv.org/abs/1804.05945","papers","20180101Z00:00:00","","","University of Edinburgh, United Kingdom; Microsoft","nlp/machine-translation, nlp/grammatical-error-correction","","","","Ngrams-LMs-2013","" "Amir Hazem, Emmanuel Morin – Université de Nantes, France","Leveraging Meta-Embeddings for Bilingual Lexicon Extraction from Specialized Comparable Corpora","http://www.aclweb.org/anthology/C18-1080","papers","20180101Z00:00:00","","","Université de Nantes, France","nlp/machine-translation, nlp/lexikon, nlp/dictionary-creation","","","","","" "Michael A. Hedderich, Dietrich Klakow – Saarland University, Saarbrücken, Germany","Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data","https://arxiv.org/abs/1807.00745","papers","20180101Z00:00:00","","","Saarland University, Saarbrücken, Germany","nlp/word-embeddings, ai/neural-networks","","","","GloVe-word-embeddings","" "Lena Hettinger, Alexander Dallmann, Albin Zehe, Thomas Niebler, Andreas Hotho – University of Würzburg, Germany","ClaiRE at SemEval-2018 Task 7: Classification of Relations using Embeddings","http://www.aclweb.org/anthology/S18-1134","papers","20180101Z00:00:00","","","University of Würzburg, Germany","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Lena Hettinger, Alexander Dallmann, Albin Zehe, Thomas Niebler, Andreas Hotho – University of Würzburg, Germany","ClaiRE at SemEval-2018 Task 7-Extended Version","https://arxiv.org/abs/1804.05825","papers","20180101Z00:00:00","","","University of Würzburg, Germany","nlp/semantics, nlp/word-embeddings","we employ a publicly available set of 300-dimensional word embeddings trained with GloVe (Pennington et al., 2014) on the Common Crawl data","","","","" "Jiaji Huang, Yi Li, Wei Ping, Liang Huang – Baidu Research, Sunnyvale, CA, USA; School of EECS, Oregon State University, Corvallis, OR, USA","Large Margin Neural Language Model","https://arxiv.org/abs/1808.08987","papers","20180101Z00:00:00","","","Baidu Research, Sunnyvale, CA, USA; School of EECS, Oregon State University, Corvallis, OR, USA","nlp/language-model, nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Balázs Indig – MTA-PPKE Magyar Nyelvtechnológiai Kutatócsoport, Hungaria","Közös crawlnak is egy korpusz a vége-Korpuszépítés a CommonCrawl .hu domainjából","http://real.mtak.hu/73329/1/crawl.pdf","papers","20180101Z00:00:00","","","MTA-PPKE Magyar Nyelvtechnológiai Kutatócsoport, Hungaria","web-science","","CC-MAIN-2017-47","","","" "Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer – Allen Institute of Artificial Intelligence, Seattle, United States; UMass Amherst, United States; Carnegie Mellon University, Pittsburgh, PA, USA; Toyota Technological Institute at Chicago, IL, USA; University of Washington, Seattle, WA, USA","Adversarial example generation with syntactically controlled paraphrase networks","https://arxiv.org/abs/1804.06059","papers","20180101Z00:00:00","","","Allen Institute of Artificial Intelligence, Seattle, United States; UMass Amherst, United States; Carnegie Mellon University, Pittsburgh, PA, USA; Toyota Technological Institute at Chicago, IL, USA; 
University of Washington, Seattle, WA, USA","nlp/machine-translation, nlp/sentence-paraphrase, nlp/sentence-embeddings","","","WMT-16-translation-task-common-crawl-corpus, patent","","" "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, Edouard Grave – Facebook AI Research","Loss in translation: Learning bilingual word mapping with a retrieval criterion","https://www.aclweb.org/anthology/papers/D/D18/D18-1330/","papers","20180101Z00:00:00","","","Facebook AI Research","nlp/word-embeddings, nlp/bilingual-word-embeddings","","","","fastText-word-embeddings","" "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, Kenneth Heafield – University of Edinburgh, United Kingdom; Microsoft","Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task","https://arxiv.org/abs/1804.05940","papers","20180101Z00:00:00","","","University of Edinburgh, United Kingdom; Microsoft","nlp/machine-translation, nlp/grammatical-error-correction","","","","","" "David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, Dan Jurafsky – University of Michigan, USA; Stanford University, USA","Measuring the evolution of a scientific field through citation frames","https://doi.org/10.1162/tacl_a_00028","papers","20180101Z00:00:00","","","University of Michigan, USA; Stanford University, USA","nlp/word-embeddings, nlp/text-analysis, nlp/citation-analysis","","","","","GloVe-word-embeddings" "Tomer Kaftan, Magdalena Balazinska, Alvin Cheung, Johannes Gehrke – University of Washington; Microsoft","Cuttlefish: A Lightweight Primitive for Adaptive Query Processing","https://arxiv.org/abs/1802.09180","papers","20180101Z00:00:00","","","University of Washington; Microsoft","information retrieval, regular expression matching, query planning, SQL processing","... to search through a contiguously-stored sample of approximately 256 thousand internet web pages collected by the Common Crawl project.","","","","" "Alexander Kagoshima, Kai Londenberg, Fang Xu – Searchmetrics GmbH","Determination of content score","https://patents.google.com/patent/US20180121430A1/en","papers","20180101Z00:00:00","","","Searchmetrics GmbH","patent, cc-cited-not-used","The crawler module [310] may automatically crawl a network and acquire contents from one or more resources in the network, acquire the contents from an open repository of web crawl data such as CommonCrawl.org.","","","","" "Ajinkya Gorakhnath Kale, Thrivikrama Taula, Amit Srivastava, Sanjika Hewavitharana – eBay Inc.","Methods and systems for query segmentation","https://patents.google.com/patent/US20180329999A1/en","papers","20180101Z00:00:00","","","eBay Inc.","ir/query-segmentation, nlp/word-embeddings, patent","","","","","GloVe-word-embeddings" "Kokas Károly, Drótos László – Országos Széchényi Könyvtár, Hungary; SZTE Klebelsberg Könyvtár, Hungary","Webarchiválás és a történeti kutatások / Web Archiving and Historical Research","http://ojs.elte.hu/index.php/digitalisbolcseszet/article/view/129","papers","20180101Z00:00:00","","","Országos Széchényi Könyvtár, Hungary; SZTE Klebelsberg Könyvtár, Hungary","web-archiving, cc-cited-not-used","","","","","" "Issa M. 
Khalil, Bei Guan, Mohamed Nabeel, Ting Yu – Qatar Computing Research Institute, Doha, Qatar","A domain is only as good as its buddies: detecting stealthy malicious domains via graph inference","https://dl.acm.org/citation.cfm?id=3176329","papers","20180101Z00:00:00","","","Qatar Computing Research Institute, Doha, Qatar","computer-security/malicious-domain-detection, computer-security/internet-security, graph-processing","","","","","" "Huda Khayrallah, Brian Thompson, Kevin Duh, Philipp Koehn – Johns Hopkins University, USA","Regularized Training Objective for Continued Training for Domain Adaptation in Neural Machine Translation","https://www.aclweb.org/anthology/papers/W/W18/W18-2705/","papers","20180101Z00:00:00","","","Johns Hopkins University, USA","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Douwe Kiela, Changhan Wang, Kyunghyun Cho – Facebook AI Research, USA; New York University, USA; CIFAR Global Scholar, Canada","Dynamic meta-embeddings for improved sentence representations","https://www.aclweb.org/anthology/D18-1176","papers","20180101Z00:00:00","","While one of the first steps in many NLP systems is selecting what pre-trained word embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. To that end, we introduce dynamic meta-embeddings, a simple yet effective method for the supervised learning of embedding ensembles, which leads to state-of-the-art performance within the same model class on a variety of tasks. We subsequently show how the technique can be used to shed new light on the usage of word embeddings in NLP systems.","Facebook AI Research, USA; New York University, USA; CIFAR Global Scholar, Canada","nlp/sentence-embeddings, nlp/word-embeddings","","","","GloVe-word-embeddings, fastText-word-embeddings","" "Johannes Kiesel, Florian Kneist, Milad Alshomary, Benno Stein, Matthias Hagen, Martin Potthast – Paderborn University, Germany; Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany; Ulm University, Germany","Reproducible Web Corpora: Interactive Archiving with Automatic Quality Assessment","https://dl.acm.org/citation.cfm?id=3239574","papers","20180101Z00:00:00","","","Paderborn University, Germany; Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany; Ulm University, Germany","web-mining, nlp/web-as-corpus","To build a solid benchmark dataset for web reproduction quality assessment, we carefully sampled web pages with the goal of representing a wide cross-section of the different types and genres of webpages found on the web. As a population of web pages to draw a sample from, we resort to the recent billion-page Common Crawl 2017-04 [36]. From there, we primarily sampled pages from most of the well-known sites—as defined by the website’s Alexa traffic rank [1]⁶—to ensure that our sample encompasses pages using the most recent web technologies and design standards. Moreover, pages from a number of less well-known sites have been included.
Altogether, the Webis Web Archive 17 comprises 10,000 web pages.","CC-MAIN-2017-04","","","" "Daesik Kim, Seonhoon Kim, Nojun Kwak – Seoul National University, South Korea; V.DO Inc., South Korea; Naver Corporation, South Korea","Textbook Question Answering with Knowledge Graph Understanding and Unsupervised Open-set Text Comprehension","https://arxiv.org/abs/1811.00232","papers","20180101Z00:00:00","","","Seoul National University, South Korea; V.DO Inc., South Korea; Naver Corporation, South Korea","nlp/question-answering, nlp/word-embeddings, nlp/knowledge-graph, nlp/text-comprehension","","","","GloVe","" "Shun Kiyono, Jun Suzuki, Kentaro Inui – Tohoku University, Japan; Center for Advanced Intelligence Project, Japan","Mixture of Expert/Imitator Networks: Scalable Semi-supervised Learning Framework","https://arxiv.org/abs/1810.05788","papers","20180101Z00:00:00","","","Tohoku University, Japan; Center for Advanced Intelligence Project, Japan","cc-cited-not-used, nlp/text-classification, ai/deep-learning, ai/neural-networks","","","","","" "Rebecca Knowles, Philipp Koehn – Johns Hopkins University, USA","Context and Copying in Neural Machine Translation","http://www.aclweb.org/anthology/D18-1339","papers","20180101Z00:00:00","","","Johns Hopkins University, USA","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Jacob Krantz, Jugal Kalita – Gonzaga University, USA; University of Colorado, USA","Abstractive Summarization Using Attentive Neural Techniques","https://arxiv.org/abs/1810.08838","papers","20180101Z00:00:00","","","Gonzaga University, USA; University of Colorado, USA","nlp/text-summarization, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Dmitry Kravchenko, Lidia Pivovarova – Ben-Gurion University of the Negev, Israel; University of Helsinki, Finland","DL Team at SemEval-2018 Task 1: Tweet Affect Detection using Sentiment Lexicons and Embeddings","http://www.aclweb.org/anthology/S18-1025","papers","20180101Z00:00:00","","","Ben-Gurion University of the Negev, Israel; University of Helsinki, Finland","nlp/sentiment-analysis","","","","GloVe-word-embeddings","" "Artur Kulmizev – University of Groningen, The Netherlands","Multilingual word embeddings and their utility in cross-lingual learning","http://hdl.handle.net/10810/29083","papers","20180101Z00:00:00","","","University of Groningen, The Netherlands","nlp/semantics, nlp/word-embeddings, cc-cited-not-used","","","","","" "Artur Kulmizev, Mostafa Abdou, Vinit Ravishankar, Malvina Nissim – University of Groningen, The Netherlands; Institute of Formal and Applied Linguistics Charles University in Prague, Czech Republic","Discriminator at SemEval-2018 Task 10: Minimally Supervised Discrimination","http://www.aclweb.org/anthology/S18-1167","papers","20180101Z00:00:00","","","University of Groningen, The Netherlands; Institute of Formal and Applied Linguistics Charles University in Prague, Czech Republic","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "José Lages, Dima L. 
Shepelyansky, Andrei Zinovyev – Université de Franche-Comté, Besançon, France","Inferring hidden causal relations between pathway members using reduced Google matrix of directed biological networks","https://doi.org/10.1371/journal.pone.0190812","papers","20180101Z00:00:00","","","Université de Franche-Comté, Besançon, France","cc-cited-not-used, graph-processing, web-science/hyperlinkgraph, network analysis, biochemistry, proteine structure","At present directed networks of real systems can be very large (about 4.2 millions for the English Wikipedia edition in 2013 [18] or 3.5 billion web pages for a publicly accessible web crawl that was gathered by the Common Crawl Foundation in 2012 [53: Meusel R, Vigna S, Lehmberg O, Bizer C. The graph structure in the web—analyzed on different aggregation levels. J. Web Sci. 2015;1:33.]).","","","","" "Oliver Lehmberg, Oktie Hassanzadeh – University of Mannheim, Germany; IBM Research, Yorktown Heights, New York, USA","Ontology Augmentation Through Matching with Web Tables","http://disi.unitn.it/~pavel/om2018/papers/om2018_LTpaper4.pdf","papers","20180101Z00:00:00","","","University of Mannheim, Germany; IBM Research, Yorktown Heights, New York, USA","semantic web, ontology extraction, web tables","We perform an empirical study of the performance of this approach in using Web Tables extracted from the Common Crawl to augment the properties in DBpedia ontology.","","","WDCWebTables","" "Tao Li, Lei Lin, Minsoo Choi, Kaiming Fu, Siyuan Gong, Jian Wang – Purdue University, Indiana, USA","Youtube av 50k: an annotated corpus for comments in autonomous vehicles","https://arxiv.org/abs/1807.11227","papers","20180101Z00:00:00","","","Purdue University, Indiana, USA","cc-cited-not-used, nlp/corpus-construction, nlp/opinion-mining, nlp/sentiment-analysis","","","","","" "Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency – Carnegie Mellon University","Multimodal Language Analysis with Recurrent Multistage Fusion: Supplementary Material","https://arxiv.org/abs/1808.03920","papers","20180101Z00:00:00","","","Carnegie Mellon University","nlp/multi-modality, nlp/language-model","We used 300 dimensional Glove word embeddings trained on 840 billion tokens from the common crawl dataset (Pennington et al., 2014).","","","GloVe-word-embeddings","" "Xiaojing Liao, Sumayah Alrwais, Kan Yuan, Luyi Xing, XiaoFeng Wang, Shuang Hao, Raheem Beyah – Indiana University Bloomington, USA; King Saud University, Saudi Arabia; University of Texas at Dallas, USA; Georgia Institute of Technology, USA","Cloud repository as a malicious service: challenge, identification and implication","https://cybersecurity.springeropen.com/articles/10.1186/s42400-018-0015-6","papers","20180101Z00:00:00","","","Indiana University Bloomington, USA; King Saud University, Saudi Arabia; University of Texas at Dallas, USA; Georgia Institute of Technology, USA","computer-security/malicious-hosting-service, computer-security/internet-security","[...], we developed BarFinder, a scanner that automatically detects Bars through inspecting the topological relations between websites and the cloud bucket they use, in an attempt to capture Bars based on the external features of the websites they serve. [...] 
Running the scanner over all the data collected by the Common Crawl (Crawl 2015), which indexed five billion web pages, for those associated with all major cloud storage providers (including Amazon S3, Cloudfront, Google Drive, etc.), we found around 1 million sites utilizing 6885 repositories hosted on these clouds. [...] We built the site list with the help of Common Crawl (Crawl 2015), a public big data project that crawls about 5 billion webpages each month through a large-scale Hadoop-based crawler and maintains lists of the crawled websites and their embedded links. Searching the Common Crawl (Crawl 2015) dataset, collected in February 2015, for the websites loading content from the 400 clean and malicious buckets identified above, we found 141,149 websites, were used by our crawler. [...] We further developed a tool in Python to recover cloud URLs from the web content gathered by Common Crawl.","CC-MAIN-2015-11","","","" "Dan Liu, Junhua Liu, Wu Guo, Shifu Xiong, Zhiqiang Ma, Rui Song, Chongliang Wu, Quan Liu – University of Science and Technology of China, China; IFLYTEK Co. LTD.","The USTC-NEL Speech Translation system at IWSLT 2018","https://arxiv.org/abs/1812.02455","papers","20180101Z00:00:00","","","University of Science and Technology of China, China; IFLYTEK Co. LTD.","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Bingbin Liu, Serena Yeung, Edward Chou, De-An Huang, Li Fei-Fei, Juan Carlos Niebles – Stanford University, USA; Google Cloud AI, Mountain View, USA","Temporal Modular Networks for Retrieving Complex Compositional Activities in Videos","http://openaccess.thecvf.com/content_ECCV_2018/html/Bingbin_Liu_Temporal_Modular_Networks_ECCV_2018_paper.html","papers","20180101Z00:00:00","","","Stanford University, USA; Google Cloud AI, Mountain View, USA","ai/computer-vision, ir/video-retrieval, ai/action-recognition, nlp/word-embeddings","","","","","" "Chi-kiu Lo, Michel Simard, Darlene Stewart, Samuel Larkin, Cyril Goutte, Patrick Littell – National Research Council, Canada","Accurate semantic textual similarity for cleaning noisy parallel corpora using semantic machine translation evaluation metric: The NRC supervised submissions to the Parallel Corpus Filtering task","http://www.aclweb.org/anthology/W18-6481","papers","20180101Z00:00:00","","","National Research Council, Canada","cc-cited-not-used, nlp/machine-translation, nlp/corpus-construction","","","","","" "Colin Lockard, Xin Luna Dong, Arash Einolghozati, Prashant Shiralkar – amazon.com","CERES: Distantly Supervised Relation Extraction from the Semi-Structured Web","https://arxiv.org/abs/1804.04635","papers","20180101Z00:00:00","","","amazon.com","ir/information-extraction, ir/relation-extraction","The CommonCrawl corpus consists of monthly snapshots of pages from millions of websites [1] on the Web. We started with a few well-known sites, including rottentomatoes.com, boxofficemojo.com, and themoviedb.org. Based on a Wikipedia list of the largest global film industries by admissions, box office, and number of productions⁸, we then issued Google searches for terms corresponding to these countries, such as “Nigerian film database” and recorded resulting sites that had detail pages related to movies. We also issued a few additional searches related to specific genres we thought may not be well-represented in mainstream sites, including “animated film database” and “documentary film database”. 
After compiling our list of sites, we then checked CommonCrawl⁹ and kept all sites with more than one hundred pages available. Our final list contains a broad mix of movie sites, including sites based around national film industries, genres, film music, and screen size. Most are in English, but the set also includes sites in Czech, Danish, Icelandic, Italian, Indonesian, and Slovak. ⁸https://en.wikipedia.org/wiki/Film_industry ⁹For each site, we scanned the CommonCrawl indices for all monthly scrapes prior to January 2018 and downloaded all pages for the site from the scrape with the largest number of unique webpages. Note that these scrapes do not necessarily obtain all pages present on a site, so the retrieved pages represent only a subset of the full site.","CC-MAIN-201[3-7]-*","","","" "Gaurav Maheshwari, Priyansh Trivedi, Denis Lukovnikov, Nilesh Chakraborty, Asja Fischer, Jens Lehmann – University of Bonn, Germany; Ruhr University, Bochum, Germany","Learning to Rank Query Graphs for Complex Question Answering over Knowledge Graphs","https://arxiv.org/abs/1811.01118","papers","20180101Z00:00:00","","","University of Bonn, Germany; Ruhr University, Bochum, Germany","information retrieval, nlp/question-answering, nlp/knowledge-graph, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Jose L. Martinez-Rodriguez, Aidan Hogan, Ivan Lopez-Arevalo – Cinvestav Tamaulipas, Ciudad Victoria, Mexico; University of Chile, Chile","Information extraction meets the Semantic Web: A survey","https://content.iospress.com/articles/semantic-web/sw180333","papers","20180101Z00:00:00","","","Cinvestav Tamaulipas, Ciudad Victoria, Mexico; University of Chile, Chile","cc-cited-not-used, semantic web, linked data, information extraction","","","","","" "Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher – Salesforce Research","The natural language decathlon: Multitask learning as question answering","https://arxiv.org/abs/1806.08730","papers","20180101Z00:00:00","","","Salesforce Research","nlp/question-answering, nlp/machine-translation, nlp/text-summarization, nlp/sentiment-analysis, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Bryan McCann, Caiming Xiong, Richard Socher – Salesforce.com, Inc.","Natural language processing using context-specific word vectors","https://patents.google.com/patent/US20180373682A1/en","papers","20180101Z00:00:00","","","Salesforce.com, Inc.","nlp/word-embeddings, patent","","","","","GloVe-word-embeddings" "Bryan McCann, Caiming Xiong, Richard Socher – Salesforce.com, Inc.","Natural language processing using a neural network","https://patents.google.com/patent/US20180349359A1/en","papers","20180101Z00:00:00","","","Salesforce.com, Inc.","nlp/word-embeddings, patent","","","","","GloVe-word-embeddings" "Evert Meijers, Antoine Peris – Delft University of Technology, The Netherlands","Using toponym co-occurrences to measure relationships between places: review, application and evaluation","https://www.tandfonline.com/doi/abs/10.1080/12265934.2018.1497526","papers","20180101Z00:00:00","","","Delft University of Technology, The Netherlands","nlp, coocurrences, toponymy, urban system, place name disambiguation, semantic relatedness","We innovate by exploiting a so far unparalleled amount of data, namely the billions of web pages contained in the commoncrawl web archive, and by applying the method also to small places that tend to be ignored by other methods. [...] we use the March 2017 data. 
The Common Crawl data comes in three formats, of which the WET format is most useful for the co-occurrence method as it only contains extracted plain text.","","","","" "Hardik Meisheri, Lipika Dey – TCS Research, New Delhi, India","TCS Research at SemEval-2018 Task 1: Learning Robust Representations using Multi-Attention Architecture","http://www.aclweb.org/anthology/S18-1043","papers","20180101Z00:00:00","","","TCS Research, New Delhi, India","nlp/sentiment-analysis","","","","GloVe-word-embeddings","" "Todor Mihaylov, Peter Clark, Tushar Khot, Ashish Sabharwal – Allen Institute for Artificial Intelligence, Seattle, USA; Heidelberg University, Germany","Can a suit of armor conduct electricity? a new dataset for open book question answering","https://www.aclweb.org/anthology/D18-1260","papers","20180101Z00:00:00","","We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1326 elementary level science facts. Roughly 6000 questions probe an understanding of these facts and their application to novel situations. This requires combining an open book fact (e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of armor is made of metal) obtained from other sources. While existing QA datasets over documents or knowledge bases, being generally self-contained, focus on linguistic understanding, OpenBookQA probes a deeper understanding of both the topic – in the context of common knowledge – and the language it is expressed in. Human performance on OpenBookQA is close to 92%, but many state-of-the-art pre-trained QA methods perform surprisingly poorly, worse than several simple neural baselines we develop. Our oracle experiments designed to circumvent the knowledge retrieval bottleneck demonstrate the value of both the open book and additional facts.
We leave it as a challenge to solve the retrieval problem in this multi-hop setting and to close the large gap to human performance.","Allen Institute for Artificial Intelligence, Seattle, USA; Heidelberg University, Germany","nlp/question-answering, nlp/word-embeddings, nlp/corpus-construction","For all experiments we used 300-dimensional GloVe (Pennington et al., 2014) embeddings pre-trained on 840B tokens from Common Crawl (https://nlp.stanford.edu/projects/glove/).","","","GloVe-word-embeddings","" "Sewon Min, Victor Zhong, Richard Socher, Caiming Xiong – Seoul National University, South Korea; Salesforce Research","Efficient and Robust Question Answering from Minimal Context over Documents","https://arxiv.org/abs/1805.08092","papers","20180101Z00:00:00","","","Seoul National University, South Korea; Salesforce Research","nlp/question-answering, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Bahman Mirheidari, Daniel Blackburn, Traci Walker, Annalena Venneri, Markus Reuber, Heidi Christensen – University of Sheffield, United Kingdom; Royal Hallamshire Hospital, United Kingdom","Detecting signs of dementia using word vector representations","https://www.isca-speech.org/archive/Interspeech_2018/pdfs/1764.pdf","papers","20180101Z00:00:00","","","University of Sheffield, United Kingdom; Royal Hallamshire Hospital, United Kingdom","nlp/word-embeddings, nlp/speech-recognition, nlp/clinical-application, dementia detection","","","","GloVe-word-embeddings","" "Alistair Moffat, Matthias Petri – University of Melbourne, Australia","Index compression using byte-aligned ANS coding and two-dimensional contexts","https://dl.acm.org/citation.cfm?id=3159663","papers","20180101Z00:00:00","","We examine approaches used for block-based inverted index compression, such as the OptPFOR mechanism, in which fixed-length blocks of postings data are compressed independently of each other. Building on previous work in which asymmetric numeral systems (ANS) entropy coding is used to represent each block, we explore a number of enhancements: (i) the use of two-dimensional conditioning contexts, with two aggregate parameters used in each block to categorize the distribution of symbol values that underlies the ANS approach, rather than just one; (ii) the use of a byte-friendly strategic mapping from symbols to ANS codeword buckets; and (iii) the use of a context merging process to combine similar probability distributions. Collectively, these improvements yield superior compression for index data, outperforming the reference point set by the Interp mechanism, and hence representing a significant step forward.
We describe experiments using the 426 GiB gov2 collection and a new large collection of publicly-available news articles to demonstrate that claim, and provide query evaluation throughput rates compared to other block-based mechanisms.","University of Melbourne, Australia","information-retrieval/search-engine, information-retrieval/inverted-index","The second pair of test files are derived from publicly available web-sourced news articles² [²http://commoncrawl.org/2016/10/news-dataset-available/], taking English language news sources (as identified by Apache Tika) from 01/09/2016 up until and including 28/02/2017, that is, a six month crawl period that contains 7,508,082 documents.","CC-NEWS","","","" "Nkwebi Motlogelwa, Edwin Thuma, Tebo Leburu-Dingalo – University of Botswana, Botswana","Merging search results generated by multiple query variants using data fusion","http://ceur-ws.org/Vol-2125/paper_194.pdf","papers","20180101Z00:00:00","","","University of Botswana, Botswana","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, ir/query-expansion","","","","CLEF-eHealth-2018-IR-task","" "Mathieu Nassif, Christoph Treude, Martin Robillard – McGill University School of Computer Science, Montreal, Quebec, Canada","Automatically Categorizing Software Technologies","https://ieeexplore.ieee.org/abstract/document/8359344","papers","20180101Z00:00:00","","Informal language and the absence of a standard taxonomy for software technologies make it difficult to reliably analyze technology trends on discussion forums and other on-line venues. We propose an automated approach called Witt for the categorization of software technology (an expanded version of the hypernym discovery problem). Witt takes as input a phrase describing a software technology or concept and returns a general category that describes it (e.g., integrated development environment), along with attributes that further qualify it (commercial, php, etc.). By extension, the approach enables the dynamic creation of lists of all technologies of a given type (e.g., web application frameworks). Our approach relies on Stack Overflow and Wikipedia, and involves numerous original domain adaptations and a new solution to the problem of normalizing automatically-detected hypernyms. We compared Witt with six independent taxonomy tools and found that, when applied to software terms, Witt demonstrated better coverage than all evaluated alternate solutions, without a corresponding degradation in false positive rate.","McGill University School of Computer Science, Montreal, Quebec, Canada","nlp/semantics, ontology extraction, ir/information-extraction","All these approaches work by mining large text corpora. Among the latest such techniques is the WebIsA Database [32] from the Web Data Commons project, which extracts hypernyms from CommonCrawl,¹ a corpus of over 2.1 billion web pages. In contrast to these previous works, our method only requires Stack Overflow tag information data and targeted Wikipedia searches.
It creates a structure that links a single term to an attributed category that describes the term.","","","","WDC-WebIsADb" "Rosa Navarrete, Sergio Luján Mora – Universidad de Alicante, Spain","A Quantitative Analysis of the Use of Microdata for Semantic Annotations on Educational Resources","http://rua.ua.es/dspace/handle/10045/73711","papers","20180101Z00:00:00","","","Universidad de Alicante, Spain","semantic web, structured data, microdata","This quantitative analysis was conducted on datasets extracted from the Common Crawl Corpus [17], as it is the largest corpus of web crawl. The datasets containing structured data were extracted by the Web Data Commons (WDC) project [18] and are available for public use. Two datasets were considered: the first, from December 2014, with 2.01 billion pages, of which 620 million pages correspond to structured data; and the second, from November 2015, with 1.77 billion pages, of which 541 million pages correspond to structured data.","","","","WebDataCommons" "Matteo Negri, Marco Turchi, Rajen Chatterjee, Nicola Bertoldi – Fondazione Bruno Kessler, Trento, Italy; University of Trento, Italy","eSCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing","https://arxiv.org/abs/1803.07274","papers","20180101Z00:00:00","","","Fondazione Bruno Kessler, Trento, Italy; University of Trento, Italy","nlp/machine-translation","A widely used resource, described in (Junczys-Dowmunt and Grundkiewicz, 2016), was included in the training set of the winning (and almost all) submissions to the last two English–German rounds of the APE task at WMT (IT domain). It consists of 4.3 million instances created by first filtering a subset of IT-related sentences from the German Common Crawl corpus⁶, and then by using two English–German and German–English PBMT systems trained on in-domain IT corpora for a round-trip translation of the selected sentences (De → En → De).","","","WMT-13-translation-task-common-crawl-corpus","" "Dávid Márk Nemeskey, András Kornai – HAS Institute of Computer Science, Budapest, Hungary","Emergency vocabulary","https://link.springer.com/article/10.1007%2Fs10796-018-9843-x","papers","20180101Z00:00:00","","","HAS Institute of Computer Science, Budapest, Hungary","nlp/vocabulary-extraction, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Phuc Nguyen, Khai Nguyen, Ryutaro Ichise, Hideaki Takeda – SOKENDAI (The Graduate University for Advanced Studies) Shonan Village, Hayama, Kanagawa, Japan; National Institute of Informatics, Tokyo, Japan","EmbNum: Semantic labeling for numerical values with deep metric learning","https://arxiv.org/abs/1807.01367","papers","20180101Z00:00:00","","","SOKENDAI (The Graduate University for Advanced Studies) Shonan Village, Hayama, Kanagawa, Japan; National Institute of Informatics, Tokyo, Japan","","In a study of Lehmberg et al., 233 million tables were extracted from the July 2015 version of the Common Crawl [...]","","","","WDCWebTables" "Xing Niu, Michael Denkowski, Marine Carpuat – University of Maryland; Amazon.com, Inc.","Bi-Directional Neural Machine Translation with Synthetic Parallel Data","https://arxiv.org/pdf/1805.11213.pdf","papers","20180101Z00:00:00","","","University of Maryland; Amazon.com, Inc.","nlp/machine-translation","","","","","" "Takuya Ohshima, Motomichi Toyama – Keio University, Yokohama, Kanagawa, Japan","SDC: structured data collection by yourself","https://dl.acm.org/citation.cfm?id=3200849","papers","20180101Z00:00:00","","","Keio University, Yokohama, Kanagawa, 
Japan","web-crawling, semantic web, structured data","","","","","WebDataCommons" "Myle Ott, Michael Auli, David Granger, Marc'Aurelio Ranzato – Facebook AI Research, USA","Analyzing uncertainty in neural machine translation","https://arxiv.org/abs/1803.00047","papers","20180101Z00:00:00","","","Facebook AI Research, USA","cc-cited-not-used, nlp/machine-translation","","","","","" "Abel L. Peirson Peirson, E. Meltem Tolunay – Stanford University, USA","Dank Learning: Generating Memes Using Deep Neural Networks","https://arxiv.org/abs/1806.04510","papers","20180101Z00:00:00","","","Stanford University, USA","nlp/text-generation, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Christian S. Perone, Roberto Silveira, Thomas S. Paula – Universitat Politècnica de Catalunya, Barcelona, Spain","Evaluation of sentence embeddings in downstream and linguistic probing tasks","https://arxiv.org/abs/1806.06259","papers","20180101Z00:00:00","","","Universitat Politècnica de Catalunya, Barcelona, Spain","nlp/word-embeddings, nlp/sentence-embeddings","","","","fasttext-word-embeddings, GloVe-word-embeddings","" "Matthias Petri, Alistair Moffat – University of Melbourne, Australia","Compact inverted index storage using general-purpose compression libraries","http://dx.doi.org/10.1002/spe.2556","papers","20180101Z00:00:00","index compression, inverted index, web search","Efficient storage of large inverted indexes is one of the key technologies that support current web search services. Here we re-examine mechanisms for representing document-level inverted indexes and within-document term frequencies, including comparing specialized methods developed for this task against recent fast implementations of general-purpose adaptive compression techniques. Experiments with the Gov2-URL collection and a large collection of crawled news stories show that standard compression libraries can provide compression effectiveness as good as or better than previous methods, with decoding rates only moderately slower than reference implementations of those tailored approaches. This surprising outcome means that high-performance index compression can be achieved without requiring the use of specialized implementations.","University of Melbourne, Australia","information-retrieval/search-engine, information-retrieval/inverted-index","We also develop (and make freely available) a new IR test collection based on the News sub-collection of the Common Crawl∗∗. The News sub-collection provides daily crawls of news websites in many languages. We refer to this collection as CC-NEWS-URL. We provide all scripts to download the freely available source WARC files from Amazon AWS and process them using Apache Tika and Apache Lucene in a consistent manner. The resulting consistency enables researchers to perform experiments on exactly the collection in their experiments, and improves comparability of results between different rounds of experimentation. For example, the number of terms reported for the GOV2-URL collection ranges from 18 million up to 48 million, preventing fair and direct comparison between results reported in different papers. The number of WARC files in CC-NEWS-URL increases each day, and hence we specify the collection using: (1) a date range; and (2) a language filter. 
For example, in this work, we utilize the CC-NEWS-20160901-2017028-EN collection which uses all English language news sources (as identified by Apache Tika) from 01/09/2016 up until and including 28/02/2017, that is, a six month crawl period that contains 7,508,082 documents, 26,240,031 unique terms and 4,457,492,131 postings. Currently the CC-NEWS-URL collection grows by roughly 50,000 English documents per day. This exact parsing can be reproduced by the scripts provided at https://github.com/mpetri/rlz-invidx and https://github.com/mpetri/TikaLuceneWarc, with raw postings lists stored in the popular “ds2i” format††. Document identifiers are again reassigned in URL order. We also explored a date-ordered collection based on the same source data, and obtained – method-for-method – uniformly weaker compression outcomes than for URL-sorted, in part because many of the URLs contain dates encoded in them anyway.","CC-NEWS","","","" "Mohammad Taher Pilehvar, Dimitri Kartsaklis, Victor Prokhorov, Nigel Collier – University of Cambridge, United Kingdom","Card-660: Cambridge Rare Word Dataset-a Reliable Benchmark for Infrequent Word Representation Models","https://arxiv.org/abs/1808.09308","papers","20180101Z00:00:00","","","University of Cambridge, United Kingdom","linguistics, nlp/semantics, nlp/word-embeddings, lexicography","","","","GloVe-word-embeddings","" "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, Alan W Black – Carnegie Mellon University, Pittsburgh, PA, USA","Style Transfer Through Back-Translation","https://arxiv.org/abs/1804.09000","papers","20180101Z00:00:00","","","Carnegie Mellon University, Pittsburgh, PA, USA","nlp/machine-translation","","","","WMT-13-translation-task-common-crawl-corpus","" "Roy Raanani, Russell Levy, Micha Yochanan Beakstone, Dominik Facher – Affectlayer Inc","Analyzing conversations to automatically identify product feature requests","https://patents.google.com/patent/US20180183930A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breadstone – Affectlayer Inc","Automatic generation of playlists from conversations","https://patents.google.com/patent/US20180046710A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone – Affectlayer Inc","Coordinating voice calls between representatives and customers to influence an outcome of the call","https://patents.google.com/patent/US9900436B2/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone – Affectlayer Inc","Modeling voice 
calls to improve an outcome of a call between a representative and a customer","https://patents.google.com/patent/US20180309873A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and study world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone, Dominik Facher – Affectlayer Inc","Analyzing conversations to automatically identify action items","https://patents.google.com/patent/US20180122383A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone, Dominik Facher – Affectlayer Inc","Analyzing conversations to automatically identify customer pain points","https://patents.google.com/patent/US20180181561A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Micha Yochanan Breakstone, Dominik Facher – Affectlayer Inc","Analyzing conversations to automatically identify product features that resonate with customers","https://patents.google.com/patent/US20180183930A1/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Dominik Facher, Micha Yochanan Breakstone – Affectlayer Inc","Automatic pattern recognition in conversations","http://www.freepatentsonline.com/10110743.html","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Roy Raanani, Russell Levy, Dominik Facher, Micha Yochanan Breakstone – Affectlayer Inc","Analyzing conversations to automatically identify deals at risk","https://patents.google.com/patent/US10133999B2/en","papers","20180101Z00:00:00","","","Affectlayer Inc","nlp/text-corpora, cc-cited-not-used, patent","At the same time, natural language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the availability of large, freely accessible natural language corpora (e.g., CommonCrawl), ...","","","","" "Jonathan Raiman, John Miller – Baidu USA LLC","Global normalized reader systems and 
methods","https://patents.google.com/patent/US20180300312A1/en","papers","20180101Z00:00:00","","","Baidu USA LLC","nlp/question-answering, nlp/word-embeddings, patent","In embodiments, the 300 dimensional 8.4B token Common Crawl GloVe vectors were used. Words missing from the Common Crawl vocabulary were set to zero.","","","GloVe-word-embeddings","" "Martin Raison, Pierre-Emmanuel Mazaré, Rajarshi Das, Antoine Bordes – Facebook AI Research, Paris, France; University of Massachusetts, Amherst, USA","Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading","https://arxiv.org/abs/1804.10490","papers","20180101Z00:00:00","","","Facebook AI Research, Paris, France; University of Massachusetts, Amherst, USA","nlp/question-answering, nlp/word-embeddings, information retrieval","","","","fastText-word-embeddings","" "Petar Ristoski, Petar Petrovski, Peter Mika, Heiko Paulheim – University of Mannheim, Germany; Yahoo Labs, London, United Kingdom","A machine learning approach for product matching and categorization","https://content.iospress.com/articles/semantic-web/sw300","papers","20180101Z00:00:00","","","University of Mannheim, Germany; Yahoo Labs, London, United Kingdom","semantic web, information extraction, microdata, linked data, data integration","","","","WDC-triples","" "Alexey Romanov, Chaitanya Shivade – University of Massachusetts Lowell, USA; IBM Almaden Research Center, San Jose, CA, USA","Lessons from Natural Language Inference in the Clinical Domain","https://arxiv.org/abs/1808.06752","papers","20180101Z00:00:00","","","University of Massachusetts Lowell, USA; IBM Almaden Research Center, San Jose, CA, USA","nlp, natural language inference","","","","GloVe-word-embeddings, fastText-word-embeddings","" "Amir Rosenfeld, Shimon Ullman – Weizmann Institute of Science, Rehovot, Israel","Action Classification via Concepts and Attributes","https://ieeexplore.ieee.org/abstract/document/8546184","papers","20180101Z00:00:00","","","Weizmann Institute of Science, Rehovot, Israel","nlp/word-embeddings, ai/computer-vision, image-classification","","","","GloVe-word-embeddings","" "Nick Rossenbach, Jan Rosendahl, Yunsu Kim, Miguel Graça, Aman Gokrani, Hermann Ney – RWTH Aachen University, Germany","The RWTH Aachen University filtering system for the WMT 2018 parallel corpus filtering task","https://www.aclweb.org/anthology/W18-6487","papers","20180101Z00:00:00","","","RWTH Aachen University, Germany","nlp/machine-translation, nlp/corpus-construction","","","","WMT-16-translation-task-common-crawl-corpus","" "Dwaipayan Roy, Debasis Ganguly, Sumit Bhatia, Srikanta Bedathur, Mandar Mitra – Indian Statistical Institute, Kolkata, India; IBM Research, Dublin, Ireland, Dublin, Ireland; IBM Research, Delhi, India, Delhi, India; Indian Institute of Technology, Delhi, Delhi, India","Using Word Embeddings for Information Retrieval: How Collection and Term Normalization Choices Affect Performance","https://dl.acm.org/citation.cfm?id=3269277","papers","20180101Z00:00:00","","","Indian Statistical Institute, Kolkata, India; IBM Research, Dublin, Ireland, Dublin, Ireland; IBM Research, Delhi, India, Delhi, India; Indian Institute of Technology, Delhi, Delhi, India","cc-cited-not-used, nlp/word-embeddings, information-retrieval/term-normalization","In future, we plan to solidify these observations [...] as well asexperiment using large datasets (e.g. Common Crawl).","","","","" "Ethan M. 
Rudd, Richard Harang, Joshua Saxe – Sophos Group PLC, VA, USA","MEADE: Towards a Malicious Email Attachment Detection Engine","https://arxiv.org/abs/1804.08162","papers","20180101Z00:00:00","","Malicious email attachments are a growing delivery vector for malware. While machine learning has been successfully applied to portable executable (PE) malware detection, we ask, can we extend similar approaches to detect malware across heterogeneous file types commonly found in email attachments? In this paper, we explore the feasibility of applying machine learning as a static countermeasure to detect several types of malicious email attachments including Microsoft Office documents and Zip archives. To this end, we collected a dataset of over 5 million malicious/benign Microsoft Office documents from VirusTotal for evaluation as well as a dataset of benign Microsoft Office documents from the Common Crawl corpus, which we use to provide more realistic estimates of thresholds for false positive rates on in-the-wild data. We also collected a dataset of approximately 500k malicious/benign Zip archives, which we scraped using the VirusTotal service, on which we performed a separate evaluation. We analyze predictive performance of several classifiers on each of the VirusTotal datasets using a 70/30 train/test split on first seen time, evaluating feature and classifier types that have been applied successfully in commercial antimalware products and R&D contexts. Using deep neural networks and gradient boosted decision trees, we are able to obtain ROC curves with >0.99 AUC on both Microsoft Office document and Zip archive datasets. Discussion of deployment viability in various antimalware contexts is provided.","Sophos Group PLC, VA, USA","web-science, computer-security/email-security","","","","","" "Maciej Rybinski, William Miller, Javier Del Ser, Miren Nekane Bilbao, José F. Aldana-Montes – University of Málaga, Spain; Anami Precision, San Sebastián, Spain; TECNALIA, Bizkaia, Spain; Basque Center for Applied Mathematics (BCAM), Bizkaia, Spain; University of the Basque Country (UPV/EHU), Bilbao, Spain","On the Design and Tuning of Machine Learning Models for Language Toxicity Classification in Online Platforms","https://link.springer.com/chapter/10.1007/978-3-319-99626-4_29","papers","20180101Z00:00:00","","","University of Málaga, Spain; Anami Precision, San Sebastián, Spain; TECNALIA, Bizkaia, Spain; Basque Center for Applied Mathematics (BCAM), Bizkaia, Spain; University of the Basque Country (UPV/EHU), Bilbao, Spain","nlp/text-classification, nlp/sentiment-analysis, nlp/word-embeddings, ai/deep-learning","","","","GloVe-word-embeddings","" "Shadi Saleh, Pavel Pecina – Charles University, Czech Republic","CUNI team: CLEF eHealth Consumer Health Search Task 2018","http://ceur-ws.org/Vol-2125/paper_201.pdf","papers","20180101Z00:00:00","","","Charles University, Czech Republic","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, nlp/machine-translation","Document collection in the CLEF 2018 consumer health search task is created using CommonCrawl platform¹. First, the query set (described in Section 2.2) is submitted to Microsoft Bing APIs, and a list of domains is extracted from the top retrieved results. This list is extended by adding reliable health websites, at the end clefehealth2018_B (which we use in this work) contained 1,653 sites, after excluding non-medical websites such as news websites.
After preparing the domain list, these domains are crawled and provided as an indexed collection to the participants.","","","CLEF-eHealth-2018-IR-task","" "Enrico Santus, Chris Biemann, Emmanuele Chersoni – Massachussetts Institute of Technology, USA; Universität Hamburg, Germany; Aix-Marseille University, France","BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern-and Graph-based Information to Identify Discriminative Attributes","https://arxiv.org/abs/1804.11251","papers","20180101Z00:00:00","","","Massachussetts Institute of Technology, USA; Universität Hamburg, Germany; Aix-Marseille University, France","nlp/semantics","Thirteen features related to word and word-feature frequency were calculated on the basis of the information extracted from a corpus of 3.2B words, corresponding to about 20\% of the Common Crawl.","??","","GloVe-word-embeddings","" "Prathusha Kameswara Sarma – University of Wisconsin-Madison","Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets","http://www.aclweb.org/anthology/N18-4007","papers","20180101Z00:00:00","","","University of Wisconsin-Madison","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Prathusha K Sarma, YIngyu Liang, William A Sethares – University of Wisconsin-Madison","Domain Adapted Word Embeddings for Improved Sentiment Classification","https://arxiv.org/abs/1805.04576","papers","20180101Z00:00:00","","","University of Wisconsin-Madison","nlp/sentiment-analysis, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Prathusha K Sarma, William Sethares – University of Wisconsin-Madison","Simple Algorithms For Sentiment Analysis On Sentiment Rich, Data Poor Domains.","http://www.aclweb.org/anthology/C18-1290","papers","20180101Z00:00:00","","","University of Wisconsin-Madison","nlp/sentiment-analysis","","","","","GloVe-word-embeddings" "Shigehiko Schamoni, Julian Hitschler, Stefan Riezler – Heidelberg University, Germany","A dataset and reranking method for multimodal MT of user-generated image captions","https://amtaweb.org/wp-content/uploads/2018/03/AMTA_2018_Proceedings_Research_Track.pdf#page=146","papers","20180101Z00:00:00","","","Heidelberg University, Germany","nlp/machine-translation","","","","WMT-13-translation-task-common-crawl-corpus","" "Julian Schamper, Jan Rosendahl, Parnia Bahar, Yunsu Kim, Arne Nix, Hermann Ney – RWTH Aachen University, Germany","The RWTH Aachen University supervised machine translation systems for WMT 2018","https://www.aclweb.org/anthology/W18-6426","papers","20180101Z00:00:00","","","RWTH Aachen University, Germany","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Sebastian Schelter, Jérôme Kunegis – Technical University Berlin, Germany; University of Namur, Belgium","On the Ubiquity of Web Tracking: Insights from a Billion-Page Web Crawl","http://dx.doi.org/10.1561/106.00000014","papers","20180101Z00:00:00","","","Technical University Berlin, Germany; University of Namur, Belgium","web-science/tracking","","","tracking-the-trackers","","" "Holger Schwenk – Facebook AI Research","Filtering and Mining Parallel Data in a Joint Multilingual Space","http://arxiv.org/abs/1805.09822","papers","20180101Z00:00:00","","","Facebook AI Research","nlp/machine-translation","","","","WMT-13-translation-task-common-crawl-corpus","" "Jurica Ševa, Mario Sänger, Ulf Leser – Humboldt-Universität zu Berlin, Germany","WBI at CLEF eHealth 2018 Task 1: Language-independent ICD-10 coding using multi-lingual embeddings and recurrent neural 
networks","http://ceur-ws.org/Vol-2125/paper_118.pdf","papers","20180101Z00:00:00","","","Humboldt-Universität zu Berlin, Germany","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, nlp/machine-translation, nlp/word-embeddings","","","","CLEF-eHealth-2018-IR-task","" "Cory Shain, Richard Futrell, Marten van Schijndel, Edward Gibson, William Schuler – Ohio State University; MIT; Johns Hopkins University","Evidence of semantic processing difficulty in naturalistic reading","https://vansky.github.io/assets/pdf/shain_etal-2018-cuny.pdf","papers","20180101Z00:00:00","","","Ohio State University; MIT; Johns Hopkins University","nlp, psycholinguistics","[...] using GloVe vectors [20] pretrained on the 840B word Common Crawl dataset [...]","","","GloVe-word-embeddings","" "Gabi Shalev, Yossi Adi, Joseph Keshet – Bar-Ilan University, Israel","Out-of-distribution detection using multiple semantic label representations","http://papers.nips.cc/paper/7967-out-of-distribution-detection-using-multiple-semantic-label-representations","papers","20180101Z00:00:00","","","Bar-Ilan University, Israel","nlp/semantics, nlp/word-embeddings, ai/neural-networks, ai/computer-vision, nlp/speech-recognition","","","","GloVe-word-embeddings","" "Sistla Sai Shravani, Niraj Kumar Jha, Rajlaksmi Guha – IT Kharagpur, India","A Machine Learning Approach to Correlate Emotional Intelligence and Happiness Based on Twitter Data","http://hci2018.bcs.org/prelim_proceedings/papers/Work-in-Progress%20Track/BHCI-2018_paper_115.pdf","papers","20180101Z00:00:00","","","IT Kharagpur, India","nlp/sentiment-analysis, nlp/word-embeddings","","","","fastText-word-embeddings","" "Umutcan Şimşek, Dieter Fensel – University of Innsbruck, Austria","Intent Generation for Goal-Oriented Dialogue Systems based on Schema.org Annotations","https://arxiv.org/abs/1807.01292","papers","20180101Z00:00:00","","","University of Innsbruck, Austria","nlp/dialogue-systems, semantic web, microformats","","","","GloVe-word-embeddings","" "Ravinder Singh, Marina Levina, Nelson Jiao, Asha Saini – DELL EMC","Using open data to predict market movements","https://education.emc.com/content/dam/dell-emc/documents/en-us/2017KS_Ravinder-Using_Open_Data_to_Predict_Market_Movements.pdf","papers","20180101Z00:00:00","","","DELL EMC","market research, nlp, information retrieval","We found that The Register articles for specific vendors extracted from the common crawl data set are highly correlated with our reading of General Purpose Magic Quadrant position movements in time. [...] The Figure 11 : Common Crawl Data Processing Flow Diagram shows a broad overview of the steps involved in the analysis of common crawl data. Going from the bottom up it shows how the data is extracted, processed and visualized. The amount of data in each phase becomes more streamlined and, hence, the reduction in size of the data being worked on. We start with the crawl data, extract the pages of interest int o a private storage bucket, and then process it to remove unwanted words/tags. At the end, visualization tools are used to graphically display the results. 
These can be used to publish standard reports or customized by users to support their own analysis.","","","","" "Peter Andrew Miller Smith, Samuel Leeman-Munk, Angi Shelton, Bradford W Mott, Eric Wiebe, James Lester – North Carolina State University, Raleigh, NC, USA; SAS Institute Inc., Cary, NC, USA","A multimodal assessment framework for integrating student writing and drawing in elementary science learning","https://ieeexplore.ieee.org/abstract/document/8274912/","papers","20180101Z00:00:00","","","North Carolina State University, Raleigh, NC, USA; SAS Institute Inc., Cary, NC, USA","nlp/word-embeddings, nlp/semantics, education, tutoring systems, student writing","","","","","" "Luca Soldaini – Georgetown University, USA","The Knowledge and Language Gap in Medical Information Seeking","https://search.proquest.com/openview/e669cd1478b33d52fa4cc71e8393c639/1","papers","20180101Z00:00:00","","","Georgetown University, USA","ir/multilingual-information-retrieval, ir/biomedical-information-retrieval","","","","","CLEF-eHealth-2018-IR-task" "Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, Daniel Gildea – University of Rochester, Rochester, NY, USA; IBM T.J. Watson Research Center, Yorktown Heights, NY, USA; School of Engineering, Westlake University, China","Exploring graph-structured passage representation for multi-hop reading comprehension with graph neural networks","https://arxiv.org/abs/1809.02040","papers","20180101Z00:00:00","","","University of Rochester, Rochester, NY, USA; IBM T.J. Watson Research Center, Yorktown Heights, NY, USA; School of Engineering, Westlake University, China","nlp/word-embeddings, nlp/machine-reading, nlp/coreference-resolution, nlp/question-answering","","","","GloVe-word-embeddings","" "Samuel Spaulding, Huili Chen, Safinah Ali, Michael Kulinski, Cynthia Breazeal – Massachusetts Institute of Technology, Cambridge, MA, USA","A social robot system for modeling children's word pronunciation: socially interactive agents track","https://dl.acm.org/citation.cfm?id=3237946","papers","20180101Z00:00:00","","","Massachusetts Institute of Technology, Cambridge, MA, USA","computer-vision, nlp/word-embeddings","","","","","GloVe-word-embeddings" "Christian Stab, Johannes Daxenberger, Chris Stahlhut, Tristan Miller, Benjamin Schiller, Christopher Tauchmann, Steffen Eger, Iryna Gurevych – Ubiquitous Knowledge Processing Lab, Department of Computer Science, Technische Universität Darmstadt, Germany","ArgumenText: Searching for Arguments in Heterogeneous Sources","http://www.aclweb.org/anthology/N18-5005","papers","20180101Z00:00:00","","","Ubiquitous Knowledge Processing Lab, Department of Computer Science, Technische Universität Darmstadt, Germany","nlp/argument-mining","we build upon the English part of CommonCrawl, [...] we followed Habernal et al. 
(2016) for de-duplication, boiler-plate removal using jusText (Pomikálek, 2011), and language detection.² This left us with 400 million heterogeneous plain-text documents in English, with an overall size of 683 GiB.","","","","" "Felix Stahlberg, Adria de Gispert, Bill Byrne – University of Cambridge, United Kingdom; SDL Research, Cambridge, United Kingdom","The University of Cambridge's Machine Translation Systems for WMT18","https://arxiv.org/abs/1808.09465","papers","20180101Z00:00:00","","","University of Cambridge, United Kingdom; SDL Research, Cambridge, United Kingdom","nlp/machine-translation","","","","WMT-13-translation-task-common-crawl-corpus","" "Chris Stahlhut – Ubiquitous Knowledge Processing Lab TU Darmstadt, Germany","Searching Arguments in German with ArgumenText","http://ceur-ws.org/Vol-2167/short7.pdf","papers","20180101Z00:00:00","","","Ubiquitous Knowledge Processing Lab TU Darmstadt, Germany","nlp/argument-mining","","","","","" "Stergios Stergiou, Dipen Rughwani, Kostas Tsioutsiouliklis – Yahoo Research, Sunnyvale, CA, USA; Google & Yahoo Research, Mountain View, CA, USA","Shortcutting Label Propagation for Distributed Connected Components","https://dl.acm.org/citation.cfm?id=3159696","papers","20180101Z00:00:00","","","Yahoo Research, Sunnyvale, CA, USA; Google & Yahoo Research, Mountain View, CA, USA","graph processing","","","","","" "Hanna Suominen, Liadh Kelly, Lorraine Goeuriot, Aurélie Névéol, Lionel Ramadier, Aude Robert, Evangelos Kanoulas, Rene Spijker, Leif Azzopardi, Dan Li, others – University of Turku, Turku, Finland; The Australian National University (ANU), Australia; Commonwealth Scientific and Industrial Research Organisation (CSIRO), University of Canberra, Canberra, Australia; Maynooth University, Maynooth, Ireland; Univ. Grenoble Alpes, CNRS, Grenoble, France; Université Paris-Saclay, Orsay, France; INSERM, France; University of Amsterdam, Amsterdam, Netherlands; Cochrane Netherlands and UMC Utrecht; Julius Center for Health Sciences and Primary Care, Utrecht, Netherlands; University of Strathclyde, Glasgow, UK; Queensland University of Technology, Brisbane, Australia; Vienna University of Technology, Vienna, Austria; Qatar Computing Research Institute, Doha, Qatar","Overview of the CLEF ehealth evaluation lab 2018","https://link.springer.com/chapter/10.1007/978-3-319-98932-7_26","papers","20180101Z00:00:00","","","University of Turku, Turku, Finland; The Australian National University (ANU), Australia; Commonwealth Scientific and Industrial Research Organisation (CSIRO), University of Canberra, Canberra, Australia; Maynooth University, Maynooth, Ireland; Univ. Grenoble Alpes, CNRS, Grenoble, France; Université Paris-Saclay, Orsay, France; INSERM, France; University of Amsterdam, Amsterdam, Netherlands; Cochrane Netherlands and UMC Utrecht; Julius Center for Health Sciences and Primary Care, Utrecht, Netherlands; University of Strathclyde, Glasgow, UK; Queensland University of Technology, Brisbane, Australia; Vienna University of Technology, Vienna, Austria; Qatar Computing Research Institute, Doha, Qatar","ir/search-engine-evaluation, nlp/corpus-construction","This year we introduced clefehealth2018 corpus. This was created by compiling Web pages of selected domains acquired from the CommonCrawl¹¹. An initial list of Websites was identified for acquisition.
The list was built by submitting the CLEF 2018 base queries to the Microsoft Bing APIs (through the Azure Cognitive Services) repeatedly over a period of few weeks¹², and acquiring the URLs of the retrieved results. The domains of the URLs were then included in the list, except some domains that were excluded for decency reasons (e.g. pornhub.com). The list was further augmented by including a number of known reliable health Websites and other known unreliable health Websites, from lists previously compiled by health institutions and agencies. The corpus was divided into folders, by domain name. Each folder contained a file for each Webpage from the domain available in the CommonCrawl dump. In total, 2,021 domains were requested from the CommonCrawl dump of 2018-09¹³. Of the 2,021 domains in total, 1,903 were successfully acquired. The remaining domains were discarded due to errors, corrupted or incomplete data returned by the CommonCrawl API (a total of ten retries were attempted for each domain before giving up on a domain). Of the 1,903 crawled domains, 84 were not available in the CommonCrawl dump, and for these, a folder in the corpus exists and represents the domain that was requested; however, the folder is empty, meaning that it was not available in the dump. Note that .pdf documents were excluded from the data acquired from CommonCrawl. A complete list of domains and size of the crawl data for each domain is available at https://github.com/CLEFeHealth/CLEFeHealth2018IRtask/ blob/master/clef2018collection_listofdomains.txt. The full collection, clefehealth2018¹⁴, it contains 5,535,120 Web pages and its uncompressed size is about 480GB. In addition to the full collection, an alternative corpus named clefehealth2018_B¹⁵ was created by manually removing a number of domains that were not strictly health-related (e.g., news Websites). This subset contains 1,653 domains and its size is about 294GB, uncompressed.","CC-MAIN-2018-09","CLEF-eHealth-2018-IR-task","","" "Shabnam Tafreshi, Mona Diab – George Washington University","Emotion Detection and Classification in a Multigenre Corpus with Joint Multi-Task Deep Learning","http://www.aclweb.org/anthology/C18-1246","papers","20180101Z00:00:00","","","George Washington University","nlp/emotion-detection, nlp/word-embeddings","Our results indicate that common crawl corpus with 2 million words, trained using fastText model has the most word coverage among these genres.","","","GloVe-word-embeddings, fastText-word-embeddings","" "Nicolas Tempelmeier, Elena Demidova, Stefan Dietze – Leibniz Universität Hannover, Germany","Inferring missing categorical information in noisy and sparse web markup","https://arxiv.org/abs/1803.00446","papers","20180101Z00:00:00","","","Leibniz Universität Hannover, Germany","semantic web, linked data","","","","WDC-triples","" "Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, Philipp Koehn – Johns Hopkins University, USA; University of Notre Dame, France; Air Force Research Laboratory, USA","Freezing Subnetworks to Analyze Domain Adaptation in Neural Machine Translation","https://arxiv.org/abs/1809.05218","papers","20180101Z00:00:00","","","Johns Hopkins University, USA; University of Notre Dame, France; Air Force Research Laboratory, USA","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Henry S. 
Thompson, Jian Tong – University of Edinburgh, United Kingdom","Can Common Crawl reliably track persistent identifier (PID) use over time?","https://arxiv.org/abs/1802.01424","papers","20180101Z00:00:00","","","University of Edinburgh, United Kingdom","web-science","","","","","" "Swapna Buccapatnam Tirumala, Ashish Jagmohan, Elham Khabiri, Ta-Hsin Li, Matthew Daniel Riemer, Vadim Sheinin, Aditya Vempaty – International Business Machines Corp.","Facilitating mapping of control policies to regulatory documents","https://patents.google.com/patent/US20180137107A1/en","papers","20180101Z00:00:00","","","International Business Machines Corp.","patent, cc-cited-not-used","The global corpora [203] can comprise a general internet-based collection of texts derived from various sources (e.g., GUTENBERG®, REUTERS®, COMMON CRAWL®, and/or GOOGLE NEWS®).","","","","" "Maksim Tkachenko, Chong Cher Chia, Hady Lauw – Singapore Management University, Singapore","Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings","http://www.aclweb.org/anthology/P18-1112","papers","20180101Z00:00:00","","","Singapore Management University, Singapore","nlp/sentiment-analysis, nlp/word-embeddings, cc-cited-not-used","","","","","" "Marcus Tober, Daniela Neumann – Searchmetrics GmbH","Creation and optimization of resource contents","https://patents.google.com/patent/US20180096067A1/en","papers","20180101Z00:00:00","","","Searchmetrics GmbH","patent, cc-cited-not-used","The crawler module [310] may automatically crawl a network and acquire contents from one or more resources in the network, acquire the contents from an open repository of web crawl data such as CommonCrawl.org.","","","","" "Melanie Tosik, Antonio Mallia, Kedar Gangopadhyay – New York University","Debunking Fake News One Feature at a Time","https://arxiv.org/abs/1808.02831","papers","20180101Z00:00:00","","","New York University","nlp, text classification","Cosine similarity between averaged headline/body Common Crawl vectors","","","?? GloVe-word-embeddings","" "Ke Tran, Yonatan Bisk – University of Amsterdam; University of Washington","Inducing Grammars with and for Neural Machine Translation","https://arxiv.org/abs/1805.10850","papers","20180101Z00:00:00","","","University of Amsterdam; University of Washington","nlp/maschine-translation, nlp/syntax, nlp/grammar-learning, nlp/dependency-grammar","","","","","" "Trieu H Trinh, Quoc V Le – Google Brain","A Simple Method for Commonsense Reasoning","https://arxiv.org/abs/1806.02847","papers","20180101Z00:00:00","","Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset [ 1]. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. 
Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.","Google Brain","ai/deep-learning, nlp/language-model","In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions. [...] We name this dataset STORIES since most of the constituent documents take the form of a story with long chain of coherent events.","","CC-Stories","","" "Dmitry Ustalov, Alexander Panchenko, Chris Biemann, Simone Paolo Ponzetto – University of Mannheim, Germany; University of Hamburg, Germany; Skolkovo Institute of Science and Technology, Moskva, Russia","Watset: local-global graph clustering with applications in sense and frame induction","https://arxiv.org/abs/1808.06696","papers","20180101Z00:00:00","","","University of Mannheim, Germany; University of Hamburg, Germany; Skolkovo Institute of Science and Technology, Moskva, Russia","nlp/dependency-parsing, nlp/semantics, nlp/synonymy, nlp/frames-semantics, graph-clustering, web-mining","For the evaluation purposes, we operate on the intersection of triples from DepCC and FrameNet.","","","depcc","" "Dmitry Ustalov, Alexander Panchenko, Chris Biemann, Simone Paolo Ponzetto – University of Mannheim, Germany; University of Hamburg, Germany","Unsupervised sense-aware hypernymy extraction","https://arxiv.org/abs/1809.06223","papers","20180101Z00:00:00","","","University of Mannheim, Germany; University of Hamburg, Germany","nlp/semantics, nlp/hypernymy, web-mining","","","","","WDC-WebIsADb" "Dmitry Ustalov, Alexander Panchenko, Andrei Kutuzov, Chris Biemann, Simone Paolo Ponzetto – University of Mannheim, Germany; University of Hamburg, Germany; University of Oslo, Norway","Unsupervised semantic frame induction using triclustering","https://arxiv.org/abs/1805.04715","papers","20180101Z00:00:00","","","University of Mannheim, Germany; University of Hamburg, Germany; University of Oslo, Norway","nlp/dependency-parsing, nlp/semantics, nlp/synonymy, nlp/frames-semantics, graph-clustering, web-mining","In our evaluation, we use triple frequencies from the DepCC dataset (Panchenkoet al., 2018) , which is a dependency-parsed version of the Common Crawl corpus, and the standard 300-dimensional word embeddings model trained on the Google News corpus (Mikolovet al., 2013). [...] For the evaluation purposes, we operate on the intersection of triples from DepCC and FrameNet.","","","depcc","" "Hal Varian – National Bureau of Economic Research, Cambridge, MA, USA","Artificial intelligence, economics, and industrial organization","https://www.nber.org/papers/w24839","papers","20180101Z00:00:00","","Machine learning (ML) and artificial intelligence (AI) have been around for many years. However, in the last 5 years, remarkable progress has been made using multilayered neural networks in diverse areas such as image recognition, speech recognition, and machine translation. AI is a general purpose technology that is likely to impact many industries. In this chapter I consider how machine learning availability might affect the industrial organization of both firms that provide AI services and industries that adopt AI technology. 
My intent is not to provide an extensive overview of this rapidly-evolving area, but instead to provide a short summary of some of the forces at work and to describe some possible areas for future research.","National Bureau of Economic Research, Cambridge, MA, USA","economy","","","","","" "Vivek Vinayan, Kumar M Anand, K P Soman – Amrita School of Engineering, India","AmritaNLP at SemEval-2018 Task 10: Capturing discriminative attributes using convolution neural network over global vector representation.","http://www.aclweb.org/anthology/S18-1166","papers","20180101Z00:00:00","","","Amrita School of Engineering, India","nlp/semantics, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Yogarshi Vyas, Xing Niu, Marine Carpuat – Department of Computer Science, University of Maryland","Identifying Semantic Divergences in Parallel Text without Annotations","https://arxiv.org/abs/1803.11112","papers","20180101Z00:00:00","","","Department of Computer Science, University of Maryland","nlp/machine-translation","","","","{?Ngrams-LMs-2013}","" "Changhan Wang, Kyunghyun Cho, Douwe Kiela – Facebook AI Research; New York University","Code-Switched Named Entity Recognition with Embedding Attention","http://www.aclweb.org/anthology/W18-3221","papers","20180101Z00:00:00","","","Facebook AI Research; New York University","nlp/named-entity-recognition, nlp/word-embeddings","","","","fastText-word-embeddings","" "Renzhi Wang, Mizuho Iwaihara – Graduate School of Information, Production and Systems, Waseda University Japan","Detection of mergeable Wikipedia articles based on overlapping topics","db-event.jpn.org/deim2018/data/papers/157.pdf","papers","20180101Z00:00:00","","","Graduate School of Information, Production and Systems, Waseda University Japan","nlp/word-embeddings, ir/duplicate-detection","","","","GloVe-word-embeddings","" "Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong, Chao Bian – Mobile Internet Group, Tencent Technology Co., Ltd; Xiamen University, China; Soochow University, China","Neural Machine Translation with Decoding History Enhanced Attention","https://www.aclweb.org/anthology/C18-1124","papers","20180101Z00:00:00","","","Mobile Internet Group, Tencent Technology Co., Ltd; Xiamen University, China; Soochow University, China","nlp/machine-translation, cc-cited-not-used","","","","","" "Zhuxiaona Wei, Thuan Nguyen, Iat Chan, Kenny M Liou, Helin Wang, Houchang Lu – Baidu USA LLC","Systems and methods for improved user interface","https://patents.google.com/patent/US20180011688A1/en","papers","20180101Z00:00:00","","","Baidu USA LLC","patent, ir/user-interface","For English, in embodiments, the language model is a Kneser-Ney smoothed 5-gram model with pruning that is trained using the KenLM toolkit on cleaned text from the Common Crawl Repository. 
The vocabulary is the most frequently used 400,000 words from 250 million lines of text, which produces a language model with about 850 million n-grams.","","","","" "John Wieting, Kevin Gimpel – Carnegie Mellon University, Pittsburgh, PA, USA; Toyota Technological Institute at Chicago, IL, USA","Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations","http://www.aclweb.org/anthology/P18-1042","papers","20180101Z00:00:00","","","Carnegie Mellon University, Pittsburgh, PA, USA; Toyota Technological Institute at Chicago, IL, USA","nlp/machine-translation, nlp/sentence-paraphrase, nlp/sentence-embeddings","","","WMT-16-translation-task-common-crawl-corpus","","" "Genta Indra Winata, Chien-Sheng Wu, Andrea Madotto, Pascale Fung – Hong Kong University of Science and Technology, Hong Kong","Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition","https://arxiv.org/abs/1805.12061","papers","20180101Z00:00:00","","","Hong Kong University of Science and Technology, Hong Kong","nlp/named-entity-recognition, nlp/word-embeddings","","","","fastText-word-embeddings","" "Ziang Xie, Guillaume Genthial, Stanley Xie, Andrew Ng, Dan Jurafsky – Stanford University, USA","Noising and Denoising Natural Language: Diverse Backtranslation for Grammar Correction","http://www.aclweb.org/anthology/N18-1057","papers","20180101Z00:00:00","","","Stanford University, USA","nlp/machine-translation, nlp/grammatical-error-correction","","","","Ngrams-LMs-2013","" "Hao Xiong, Zhongjun He, Xiaoguang Hu, Hua Wu – Baidu Inc., China","Multi-channel encoder for neural machine translation","https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/16788","papers","20180101Z00:00:00","","","Baidu Inc., China","nlp/machine-translation","","","","WMT-16-translation-task-common-crawl-corpus","" "Steven Xu, Andrew Bennett, Doris Hoogeveen, Jey Han Lau, Timothy Baldwin – University of Melbourne, Australia","Preferred Answer Selection in Stack Overflow: Better Text Representations... 
and Metadata, Metadata, Metadata","https://www.aclweb.org/anthology/W18-6119","papers","20180101Z00:00:00","","","University of Melbourne, Australia","information retrieval, nlp/question-answering, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Hua Yang, Teresa Gonçalves – University of Èvora, Portugal; ZhongYuan University of Technology, Zhengzhou, China","Improving personalized consumer health search: notebook for ehealth at clef 2018","http://ceur-ws.org/Vol-2125/paper_195.pdf","papers","20180101Z00:00:00","","","University of Èvora, Portugal; ZhongYuan University of Technology, Zhengzhou, China","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, ir/query-expansion, ir/learning-to-rank, nlp/word-embeddings","","","","CLEF-eHealth-2018-IR-task","" "Thanos Yannakis, Pavlos Fafalios, Yannis Tzitzikas – University of Crete, Greece; Leibniz University of Hannover, Germany","Heuristics-based Query Reordering for Federated Queries in SPARQL 1.1 and SPARQL-LD","http://ceur-ws.org/Vol-2110/paper7.pdf","papers","20180101Z00:00:00","","","University of Crete, Greece; Leibniz University of Hannover, Germany","semantic web, linked data, SparQL","","","","WebDataCommons","" "Evi Yulianti, Ruey-Cheng Chen, Falk Scholer, W Bruce Croft, Mark Sanderson – RMIT University, Melbourne, Australia; SEEK Ltd., Melbourne, Australia","Ranking Documents by Answer-Passage Quality","http://marksanderson.org/publications/my_papers/SIGIR2018a.pdf","papers","20180101Z00:00:00","","","RMIT University, Melbourne, Australia; SEEK Ltd., Melbourne, Australia","information retrieval, nlp/question-answering, cc-cited-not-used","","","","","" "Siwar Zayani, Nesrine Ksentini, Mohamed Tmar, Faiez Gargouri – University of Sfax, Tunisia","Miracl at clef 2018: Consumer health search task","http://ceur-ws.org/Vol-2125/paper_141.pdf","papers","20180101Z00:00:00","","","University of Sfax, Tunisia","ir/multilingual-information-retrieval, ir/biomedical-information-extraction, ir/query-expansion","","","","CLEF-eHealth-2018-IR-task","" "Neil Zeghidour, Qiantong Xu, Vitaliy Liptchinsky, Nicolas Usunier, Gabriel Synnaeve, Ronan Collobert – Facebook A.I. Research, Paris, France; Facebook A.I. Research, New York & Menlo Park, USA; CoML, ENS/CNRS/EHESS/INRIA/PSL Research University, Paris, France","Fully convolutional speech recognition","https://arxiv.org/abs/1812.06864","papers","20180101Z00:00:00","","","Facebook A.I. Research, Paris, France; Facebook A.I. Research, New York & Menlo Park, USA; CoML, ENS/CNRS/EHESS/INRIA/PSL Research University, Paris, France","nlp/speech-recognition","(12k training hours AM, common crawl LM)","","","??","" "Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi – University of Washington, USA","Swag: A large-scale adversarial dataset for grounded commonsense inference","https://arxiv.org/abs/1808.05326","papers","20180101Z00:00:00","","","University of Washington, USA","ai/reasoning, nlp/text-generation, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Meilin Zhan, Roger Levy – Massachusetts Institute of Technology, USA","Comparing Theories of Speaker Choice Using a Model of Classifier Production in Mandarin Chinese","http://www.aclweb.org/anthology/N18-1181","papers","20180101Z00:00:00","","","Massachusetts Institute of Technology, USA","nlp/syntax, nlp/corpus-lingustics, nlp/paraphrasing","","","","","WMT-13-translation-task-common-crawl-corpus" "Yunming Zhang, Mengjiao Yang, Riyadh Baghdadi, Shoaib Kamil, Julian Shun, Saman P. 
Amarasinghe – MIT CSAIL; Adobe Research","GraphIt - A High-Performance DSL for Graph Analytics","http://arxiv.org/abs/1805.00923","papers","20180101Z00:00:00","","","MIT CSAIL; Adobe Research","graph-processing","","","","WDC-hyperlinkgraph","" "Pengqing Zhang, Yuexian Hou, Zhan Su, Yi Su – Tianjin University, China","Two-Step Multi-factor Attention Neural Network for Answer Selection","https://link.springer.com/chapter/10.1007/978-3-319-97304-3_50","papers","20180101Z00:00:00","","","Tianjin University, China","nlp/answer-selection, ai/neural-networks, nlp/word-embeddings","","","","GloVe-word-embeddings","" "Ji Zhang, Leonard Tan, Xiaohui Tao, Xiaoyao Zheng, Yonglong Luo, Jerry Chun-Wei Lin – University of Southern Queensland, Australia; Anhui Normal University, Wuhu, China; Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China","SLIND: Identifying Stable Links in Online Social Networks","https://link.springer.com/chapter/10.1007/978-3-319-91458-9_54","papers","20180101Z00:00:00","","","University of Southern Queensland, Australia; Anhui Normal University, Wuhu, China; Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China","web-science/hyperlinkgraph, web-science/social-networks","The dataset chosen for this study, as well as for the demo, was crawled from Facebook and obtained from the repositories of the Common Crawl (August 2016).","CC-MAIN-2016-36","","","" "Ji Zhang, Xiaohui Tao, Leonard Tan, Jerry Chun-Wei Lin, Hongzhou Li, Liang Chang – University of Southern Queensland, Toowoomba, Australia; Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China; Guilin University of Electronic Technology, Guilin, China; Guilin University of Electronic Technology, Guilin, China","On Link Stability Detection for Online Social Networks","https://link.springer.com/chapter/10.1007/978-3-319-98809-2_20","papers","20180101Z00:00:00","link stability, graph theory, online social networks","","University of Southern Queensland, Toowoomba, Australia; Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China; Guilin University of Electronic Technology, Guilin, China; Guilin University of Electronic Technology, Guilin, China","graph-processing, social networks","Since the social network we obtain from the repositories of common crawl contains missing links and partial information, stochastic estimations are …","","","","" "Biao Zhang, Deyi Xiong, Jinsong Su – Xiamen University, China; Soochow University, China","Neural Machine Translation with Deep Attention","https://ieeexplore.ieee.org/abstract/document/8493282","papers","20180101Z00:00:00","","","Xiamen University, China; Soochow University, China","nlp/machine-translation","","","","","" "Biao Zhang, Deyi Xiong, Jinsong Su, Qian Lin, Huiji Zhang – Xiamen University, China; Soochow University, China; Xiamen Meiya Pico information Co., Ltd. Xiamen, China","Simplifying Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks","https://arxiv.org/abs/1810.12546","papers","20180101Z00:00:00","","","Xiamen University, China; Soochow University, China; Xiamen Meiya Pico information Co., Ltd. 
Xiamen, China","nlp/machine-translation, cc-cited-not-used","","","","","" "Nils Brügger, Ian Milligan – Aarhus University, Denmark; University of Waterloo, Canada","The SAGE Handbook of Web History","https://us.sagepub.com/en-us/nam/the-sage-handbook-of-web-history/book252251","papers","20190101Z00:00:00","","","Aarhus University, Denmark; University of Waterloo, Canada","web-science, web history","","","","","" "Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave – Facebook AI","CCNet: Extracting high quality monolingual datasets from web crawl data","https://arxiv.org/abs/1911.00359","papers","20190101Z00:00:00","","Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.","Facebook AI","nlp/corpus-construction, nlp/web-as-corpus, nlp/low-resource-language","[about https://github.com/facebookresearch/cc_net] In this paper, we present a data collection pipeline that allows to gather massive monolingual corpora of high quality in a variety of languages, including many low-resource ones. The principles of our pipeline are general and we show the results of its application to data collected by the Common Crawl project.¹ Common Crawl is a massive non-curated dataset of webpages in many languages, mixed together in temporal snapshots of the web.","","CCNet","","" "A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, Ilya Sutskever – OpenAI, San Francisco, California, United States","Language models are unsupervised multitask learners","https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe","papers","20190101Z00:00:00","","","OpenAI, San Francisco, California, United States","cc-cited-not-used","A promising source of diverse and nearly unlimited text is web scrapes such as Common Crawl. While these archives are many orders of magnitude larger than current language modeling datasets, they have significant data quality issues. Trinh & Le (2018) used Common Crawl in their work on commonsense reasoning but noted a large amount of documents “whose content are mostly unintelligible”. We ob-served similar data issues in our initial experiments with Common Crawl. Trinh & Le (2018)’s best results were achieved using a small subsample of Common Crawl which included only documents most similar to their target dataset,the Winograd Schema Challenge. While this is a pragmatic approach to improve performance on a specific task, we want to avoid making assumptions about the tasks to be performed ahead of time.Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. 
Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny. The resulting dataset, WebText, contains the text subsetof these 45 million links.","","","","" "Pedro Javier Ortiz Suárez, Benoît Sagot, Laurent Romary – Inria, Paris, France; Sorbonne University, Paris, France","Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures","https://hal.inria.fr/hal-02148693","papers","20190101Z00:00:00","","","Inria, Paris, France; Sorbonne University, Paris, France","nlp/corpus-construction","We use the November 2018 snapshot which surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header. From now on, when we mention the “Common Crawl” corpus, we refer to this particular November 2018 snapshot.","CC-MAIN-2018-47 (WET)","OSCAR","","" "Dominik Mottl – Hochschule Darmstadt, Germany","Multi-Label Branchenklassifikation von Web-Texten","https://fbmn.h-da.de/uploads/Themen/WS18_thesis_mottl.pdf","papers","20190101Z00:00:00","","","Hochschule Darmstadt, Germany","nlp/NER, entity-linking","NER of company names and linking to DBpedia performed on English texts in 712 WET files of November 2018 crawl (CC-MAIN-2018-47) using cc-pyspark.","","","","" "Sebastian Nagel – Common Crawl, USA","Accessing WARC files via SQL","https://digital.library.unt.edu/ark:/67531/metadc1608961/","papers","20190101Z00:00:00","","","Common Crawl, USA","web-archiving, SQL, Parquet","","cc-index-table","","","" "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov – Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA; Facebook AI","RoBERTa: A Robustly Optimized BERT Pretraining Approach","https://arxiv.org/abs/1907.11692","papers","20190101Z00:00:00","","","Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA; Facebook AI","nlp/corpus-construction, nlp/language-model","We find that BERT was significantly undertrained and propose an improved recipe for training BERT models, which we call RoBERTa, that can match or exceed the performance of all of the post-BERT methods. Our modifications are simple, they include: (1) training the model longer, with bigger batches, over more data; (2) removing the next sentence prediction objective; (3) training on longer sequences; and (4) dynamically changing the masking pattern applied to the training data. We also collect a large new dataset (CC-NEWS) of comparable size to other privately used datasets, to better control for training set size effects. [...] CC-NEWS, which we collected from the English portion of the CommonCrawl News dataset (Nagel, 2016). The data contains 63 million English news articles crawled between September 2016 and February 2019. (76GB after filtering).⁴ [⁴ We use news-please (Hamborg et al.,2017) to collect and extract CC-NEWS. CC-NEWS is similar to the REALNEWS dataset described in Zellers et al. 
(2019).]","CC-NEWS","CC-NEWS-RoBERTa","","" "Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi – University of Washington, USA; Allen Institute for Artificial Intelligence, USA","Defending against neural fake news","http://papers.nips.cc/paper/9106-defending-against-neural-fake-news.pdf","papers","20190101Z00:00:00","","","University of Washington, USA; Allen Institute for Artificial Intelligence, USA","nlp/language-model, nlp/fake-news-detection, nlp/text-classification, misinformation, disinformation","Dataset. We present RealNews, a large corpus of news articles from Common Crawl. Training Grover requires a large corpus of news articles with metadata, but none currently exists. Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News. We used the Newspaper Python library to extract the body and meta-data from each article. News from Common Crawl dumps from December 2016 through March 2019 were used as training data; articles published in April 2019 from the April 2019 dump were used for evaluation. After deduplication, RealNews is 120 gigabytes without compression. [...] Obtaining the data required through Common Crawl cost \$10k in AWS credits and can be massively parallelized over many CPUs. [...]","","Grover-RealNews","","" "Giulio Ermanno Pibiri, Matthias Petri, Alistair Moffat – University of Melbourne, Australia; University of Pisa, Italy; ISTI-CNR, Pisa, Italy","Fast Dictionary-Based Compression for Inverted Indexes","https://dl.acm.org/citation.cfm?id=3290962","papers","20190101Z00:00:00","","Dictionary-based compression schemes provide fast decoding operation, typically at the expense of reduced compression effectiveness compared to statistical or probability-based approaches. In this work, we apply dictionary-based techniques to the compression of inverted lists, showing that the high degree of regularity that these integer sequences exhibit is a good match for certain types of dictionary methods, and that an important new trade-off balance between compression effectiveness and compression efficiency can be achieved. Our observations are supported by experiments using the document-level inverted index data for two large text collections, and a wide range of other index compression implementations as reference points. Those experiments demonstrate that the gap between efficiency and effectiveness can be substantially narrowed.","University of Melbourne, Australia; University of Pisa, Italy; ISTI-CNR, Pisa, Italy","information-retrieval/search-engine, information-retrieval/inverted-index","We use the standard Gov2 collection containing 426 GiB of text; and CCNEWS, an English subset of the freely available NEWS subset of the CommonCrawl¹ [¹http://commoncrawl.org/2016/10/news-dataset-available/], consisting of news articles in the period 09/01/16 to 30/03/18, following the methodology of Petri and Moffat [28].","CC-NEWS","","","" "Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin – Facebook AI","CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB","https://arxiv.org/abs/1911.04944","papers","20190101Z00:00:00","Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","We show that margin-based bitext mining in a multilingual sentence space can be applied to monolingual corpora of billions of sentences. 
We are using ten snapshots of a curated common crawl corpus (Wenzek et al., 2019), totalling 32.7 billion unique sentences. Using one unified approach for 38 languages, we were able to mine 4.5 billions parallel sentences, out of which 661 million are aligned with English. 20 language pairs have more then 30 million parallel sentences, 112 more then 10 million, and most more than one million, including direct alignments between many European or Asian languages.","Facebook AI","nlp/corpus-construction, nlp/parallel-corpus, nlp/machine-translation","The curated Common Crawl corpus¶ In this work, we propose to mine parallel sentences from the Web, by using the data released by the Common Crawl project.[⁵https://commoncrawl.org/] Each month, a snapshot of the Web containing terabytes of web pages in various languages is obtained by randomly exploring URLs. We start by applying some preprocessing steps to the raw text data, following the pipeline introduced by Wenzek et al. (2019) and leading to the CCNet dataset. The first step is to deduplicate the data at the paragraph level, as the original crawls contain up to 70% of duplicated data. This preprocessing removes low quality content, such as boilerplate, navigation menus or cookie warnings. The second step of the pipeline is to identify the language of each document, using fastText6 (Grave et al., 2018). This language identifier uses a linear classifier with character n-gram features, and can recognize up to 176 languages. Finally, the last step of the preprocessing is to filter low quality content by training a language model on Wikipedia, and only keeping documents with a low perplexity score. We refer the reader to Wenzek et al. (2019) for more details about this pre- processing pipeline. In Figure 1, we report the number of unique sentences obtained after preprocessing ten snapshots from Common Crawl. We currently process 38 languages. The English Web content is abundant and we used only one snapshot.","","CCMatrix","","" "Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc'Aurelio Ranzato, Arthur Szlam – Facebook AI Research; Harvard University, USA","Real or Fake? Learning to Discriminate Machine from Human Generated Text","https://arxiv.org/abs/1906.03351","papers","20190101Z00:00:00","","","Facebook AI Research; Harvard University, USA","nlp/text-classification","CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016) [Sebastian Nagel. Cc-news. http://web.archive.org/save/http://commoncrawl. org/2016/10/news-dataset-available/, 2016.], which totals around 16 Billion words.","CC-NEWS","CCNews (Bakhtin, et al. 2019)","","" "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le – Carnegie Mellon University, Google AI Brain Team","XLNet: Generalized Autoregressive Pretraining for Language Understanding","https://arxiv.org/abs/1906.08237","papers","20190101Z00:00:00","","","Carnegie Mellon University, Google AI Brain Team","nlp/transformer-language-model","Following BERT [10], we use the BooksCorpus [40] and English Wikipedia as part of our pretraining data, which have 13GB plain text combined. In addition, we include Giga5 (16GB text) [26], ClueWeb 2012-B (extended from 5]), and Common Crawl [6] for pretraining. 
We use heuristics to aggressively filter out short or low-quality articles for ClueWeb 2012-B and Common Crawl, which results in 19GB and 110GB text respectively.","","","","" "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov – Facebook AI","Unsupervised Cross-lingual Representation Learning at Scale","https://arxiv.org/abs/1911.02116","papers","20190101Z00:00:00","Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6\% average accuracy on XNLI, +13\% average F1 score on MLQA, and +2.4\% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7\% in XNLI accuracy for Swahili and 11.4\% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.","Facebook AI","nlp/corpus-construction, nlp/web-as-corpus, nlp/language-model","Following Wenzek et al. (2019)², we build a clean CommonCrawl Corpus in 100 languages. [...] In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages.","","CC-100","CCNet","" "Joel Mackenzie, Rodger Benham, Matthias Petri, Johanne R. Trippas, J. Shane Culpepper, Alistair Moffat – The University of Melbourne, Melbourne, Australia; RMIT University, Melbourne, Australia; Amazon Alexa, Manhattan Beach, CA, USA","CC-News-En: A large English news corpus","https://doi.org/10.1145/3340531.3412762","papers","20200101Z00:00:00","corpus, user query variations, collection, news search, crowdsourcing","We describe a static, open-access news corpus using data from the Common Crawl Foundation, who provide free, publicly available web archives, including a continuous crawl of international news articles published in multiple languages. Our derived corpus, CC-News-En, contains 44 million English documents collected between September 2016 and March 2018. The collection is comparable in size with the number of documents typically found in a single shard of a large-scale, distributed search engine, and is four times larger than the news collections previously used in offline information retrieval experiments. To complement the corpus, 173 topics were curated using titles from Reddit threads, forming a temporally representative sampling of relevant news topics over the 583 day collection window.
Information needs were then generated using automatic summarization tools to produce textual and audio representations, and used to elicit query variations from crowdworkers, with a total of 10,437 queries collected against the 173 topics. Of these, 10,089 include key-stroke level instrumentation that captures the timings of character insertions and deletions made by the workers while typing their queries. These new resources support a wide variety of experiments, including large-scale efficiency exercises and query auto-completion synthesis, with scope for future addition of relevance judgments to support offline effectiveness experiments and hence batch evaluation campaigns.","The University of Melbourne, Melbourne, Australia; RMIT University, Melbourne, Australia; Amazon Alexa, Manhattan Beach, CA, USA","nlp/text-corpora, nlp/corpus-construction, ir/information-extraction","Our derived corpus, CC-News-En, contains 44 million English documents collected between September 2016 and March 2018. [...] One such example is the CommonCrawl Foundation,[¹ ] who generate large-scale crawls of the web at regular intervals. A key philosophy behind the Common Crawl is to democratize data, allowing open access with no fees. In late 2016, the Common Crawl Foundation announced a news-specific crawl (CC-News), [² ] with documents being added on a daily basis, and covering sources from a wide range of countries and languages. Here we derive a static, English segment of the CC-News crawl that we refer to as CC-News-En. Due to the storage and computation costs involved in filtering out non-English documents, we make the complete corpus available as a free resource, along with a suite of tools which can be used to replicate corpus extraction from the original source CC-News data. We also provide a set of 10,437 user query variations over 173 query topics, including keystroke-level data collected from a novel crowdworking experiment. Our goal is to encourage reproducible and replicable experimentation, with greatly reduced barriers to entry. [...] A total of 2,291 CC-News WARC files were processed to build CC-News-En, covering the period 26 August 2016 to 31 March 2018, inclusive. The first and last WARC files in this collection are as follows: • CC-NEWS-20160826124520-00000.warc.gz • CC-NEWS-20180331191315-00143.warc.gz The resulting subset of compressed WARC files occupies 2.14 TiB of disk space, and contains a total of 102.5 million documents in over 100 languages. [...] Missing Documents and Temporal Gaps. During the creation of the collection, the CC-NEWS-20170812163812-00038.warc.gz file was not processed correctly by our pipeline, and was subsequently dropped from the CC-News-En corpus. In addition, there are six days within the 583 day period where no WARC files were added to the original CC-News crawl: 22/09/2016 – 25/09/2016 inclusive, 18/12/2017, and 22/12/2017. These gaps typically correspond to hardware and software upgrades on the crawl servers.[¹⁸ Private correspondence with Common Crawl Engineers.]
It is also important to note that both CC-News and CC-News-En are not intended to be complete crawls of their sources, but rather, to provide a reproducible sample of these sites.","CC-NEWS","","","" "Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, Philipp Koehn – Facebook AI; Johns Hopkins University","CCAligned: A Massive collection of cross-lingual web-document pairs","https://www.aclweb.org/anthology/2020.emnlp-main.480","papers","20200101Z00:00:00","","Cross-lingual document alignment aims to identify pairs of documents in two distinct languages that are of comparable content or translations of each other. In this paper, we exploit the signals embedded in URLs to label web documents at scale with an average precision of 94.5{\%} across different language pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other. We release a new web dataset consisting of over 392 million URL pairs from Common Crawl covering documents in 8144 language pairs of which 137 pairs include English. In addition to curating this massive dataset, we introduce baseline methods that leverage cross-lingual representations to identify aligned documents based on their textual content. Finally, we demonstrate the value of this parallel documents dataset through a downstream task of mining parallel sentences and measuring the quality of machine translations from models trained on this mined data. Our objective in releasing this dataset is to foster new research in cross-lingual NLP across a variety of low, medium, and high-resource languages.","Facebook AI; Johns Hopkins University","nlp/machine-translation, nlp/text-corpora, nlp/parallel-corpus, nlp/cross-lingual-document-alignment","[...] we exploit the signals embedded in URLs to label web documents at scale with an average precision of 94.5{\%} across different language pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other. We release a new web dataset consisting of over 392 million URL pairs from Common Crawl covering documents in 8144 language pairs of which 137 pairs include English. [...] Starting from 68 Common Crawl snapshots with a raw document count of 169.4 billion documents, upon deduplication, the resultant corpus is approximately 29.6 billion web documents from 107.8 million distinct web domains – a 83{\%} reduction from the raw corpus.","","CCAligned-2020","","" "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei – Johns Hopkins University; OpenAI","Language models are few-shot learners","https://arxiv.org/abs/2005.14165","papers","20200101Z00:00:00","","","Johns Hopkins University; OpenAI","nlp/language-model, ai/deep-learning, nlp/autoregressive-transformer-language-model, nlp/question-answering, nlp/machine-translation, nlp/text-generation","Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset [...] constituting nearly a trillion words. [...] 
However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets. Therefore, we took 3 steps to improve the average quality of our datasets: (1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora, (2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity. Details of the first two points (processing of Common Crawl) are described in Appendix A.","","","","" "Metod Jazbec, Barna Pásztor, Felix Faltings, Nino Antulov-Fantulin, Petter N. Kolm – ETH Zurich, Switzerland; New York University, New York, USA","On the impact of publicly available news and information transfer to financial markets","https://arxiv.org/abs/2010.12002","papers","20200101Z00:00:00","","We quantify the propagation and absorption of large-scale publicly available news articles from the World Wide Web to financial markets. To extract publicly available information, we use the news archives from the Common Crawl, a nonprofit organization that crawls a large part of the web. We develop a processing pipeline to identify news articles associated with the constituent companies in the S&P 500 index, an equity market index that measures the stock performance of U.S. companies. Using machine learning techniques, we extract sentiment scores from the Common Crawl News data and employ tools from information theory to quantify the information transfer from public news articles to the U.S. stock market. Furthermore, we analyze and quantify the economic significance of the news-based information with a simple sentiment-based portfolio trading strategy. Our findings provide support that information in publicly available news on the World Wide Web has a statistically and economically significant impact on events in financial markets.","ETH Zurich, Switzerland; New York University, New York, USA","statistical-finance, ai/machine-learning, nlp/sentiment-analysis","In this article, we use news articles from the Common Crawl News, a subset of the Common Crawl’s petabytes of publicly available World Wide Web archives, to measure the impact of the arrival of new information about the constituent stocks in the S&P 500 index at the time of publishing. To the best of our knowledge, our study is the first one to use the Common Crawl in this way. We develop a cloud-based processing pipeline that identifies news articles in the Common Crawl News data that are related to the companies in the S&P 500. As the Common Crawl public data archives are getting bigger, they are opening doors for many real-world “data-hungry” applications such as transformer models GPT⁴⁹ and BERT⁵⁰, a recent class of deep learning language models. We believe that public sources of news data are important not only for natural language processing (NLP) and finance communities but also for more general studies in complex systems and computational social sciences that are aiming to characterize (mis)information propagation and dynamics in techno-socio-economic systems.
The abundance of high-frequency data around the financial systems enables complex systems researchers to have microscopic observables that allow verification of different models, theories, and hypotheses.","CC-NEWS","","","" "Marco Squarcina, Mauro Tempesta, Lorenzo Veronese, Stefano Calzavara, Matteo Maffei – TU Wien, Austria; Università Ca’ Foscari Venezia, Italy","Can I take your subdomain? Exploring related-domain attacks in the modern web","https://arxiv.org/abs/2012.01946","papers","20200101Z00:00:00","","","TU Wien, Austria; Università Ca’ Foscari Venezia, Italy","computer-security/internet-security, related-domain attacks","Our web security analysis aims at quantifying the number of domains hosting web applications that can be exploited by taking over the vulnerable domains discovered by RDScan. In particular, for every apex domain with at least one vulnerable subdomain, we selected from the CommonCrawl dataset [¹⁹ Common Crawl. Host- and domain-level webgraphs feb/mar/may 2020. https://commoncrawl.org/2020/06/host-and-domain-level-web-graphs-febmarmay-2020/, 2020.] the list of 200 most popular related-domains according to the Pagerank score [11]. From the homepage of these domains,we extracted the same-origin links that appear in the HTML code.","hyperlinkgraph/cc-main-2020-feb-mar-may/hostgraph","","","" "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu – Google, Mountain View, CA, USA","Exploring the limits of transfer learning with a unified text-to-text transformer","http://jmlr.org/papers/v21/20-074.html","papers","20200101Z00:00:00","","Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.","Google, Mountain View, CA, USA","nlp/corpus-construction, nlp/language-model","We also introduce our approach for treating every problem as a text-to-text task and describe our “Colossal Clean Crawled Corpus” (C4), the Common Crawl-based data set we created as a source of unlabeled text data. [...] Common Crawl is a publicly-available web archive that provides “web extracted text” by removing markup and other non-text content from the scraped HTML files. This process produces around 20TB of scraped text data each month. Unfortunately, the majority of the resulting text is not natural language. Instead, it largely comprises gibberish or boiler-plate text like menus, error messages, or duplicate text. 
Furthermore, a good deal of the scraped text contains content that is unlikely to be helpful for any of the tasks we consider (offensive language, placeholder text, source code, etc.). To address these issues, we used the following heuristics for cleaning up Common Crawl’s web extracted text: [...] To assemble our base data set, we downloaded the web extracted text from April 2019 and applied the aforementioned filtering. This produces a collection of text that is not only orders of magnitude larger than most data sets used for pre-training (about 750 GB) but also comprises reasonably clean and natural English text. We dub this data set the “Colossal Clean Crawled Corpus” (or C4 for short) and release it as part of TensorFlow Datasets.⁸ [⁸https://www.tensorflow.org/datasets/catalog/c4]","CC-MAIN-2019-18 (WET)","Tensorflow-C4","","" "Jay M. Patel – Specrom Analytics, Ahmedabad, India","Getting structured data from the internet","https://www.apress.com/gp/book/9781484265758","papers","20200101Z00:00:00","","","Specrom Analytics, Ahmedabad, India","web-mining","[Chapter 6: Introduction to Common Crawl Datasets + Chapter 7: Web Crawl Processing on Big Data Scale]","","","","" "Jonathan Dunn – University of Canterbury, Christchurch, New Zealand","Mapping languages: The Corpus of Global Language Use","https://doi.org/10.1007/s10579-020-09489-2","papers","20200101Z00:00:00","","This paper describes a web-based corpus of global language use with a focus on how this corpus can be used for data-driven language mapping. First, the corpus provides a representation of where national varieties of major languages are used (e.g., English, Arabic, Russian) together with consistently collected data for each variety. Second, the paper evaluates a language identification model that supports more local languages with smaller sample sizes than alternative off-the-shelf models. Improved language identification is essential for moving beyond majority languages. Given the focus on language mapping, the paper analyzes how well this digital language data represents actual populations by (i) systematically comparing the corpus with demographic ground-truth data and (ii) triangulating the corpus with an alternate Twitter-based dataset. In total, the corpus contains 423 billion words representing 148 languages (with over 1 million words from each language) and 158 countries (again with over 1 million words from each country), all distilled from Common Crawl web data. The main contribution of this paper, in addition to describing this publicly-available corpus, is to provide a comprehensive analysis of the relationship between two sources of digital data (the web and Twitter) as well as their connection to underlying populations.","University of Canterbury, Christchurch, New Zealand","nlp/corpus-construction, nlp/language-identification","The raw portions of the Common Crawl dataset used to build the corpus are shown in Table 2. The corpus uses every portion of the crawl from March 2014 to June 2019, totaling 147 billion web pages in total. 
No temporal divisions are included in the corpus because these dates represent the time of collection rather than the time of production: web data does not expire and there is a long-tail in which the same samples are observed multiple times across different periods.","64 monthly crawls: March 2014 (CC-MAIN-2014-10) -- June 2019 (CC-MAIN-2019-29) (WET)","earthlings.io/CGLU","","" "Liang Xu, Xuanwei Zhang, Qianqian Dong – CLUE Organization","CLUECorpus2020: A large-scale Chinese corpus for pre-training language model","https://arxiv.org/abs/2003.01355","papers","20200101Z00:00:00","","","CLUE Organization","nlp/corpus-construction","we introduce the Chinese corpus from CLUE organization, CLUECorpus2020, a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl¹. [...] We download the corpus from July to December 2019 from Common Crawl. After the aforementioned filtering method, we extract the corpus of 100GB.","July to December 2019 (WARC)","","","" "Andreas Giannakoulopoulos, Minas Pergantis, Nikos Konstantinou, Aristeidis Lamprogeorgos, Laida Limniati, Iraklis Varlamis – Ionian University, Corfu, Greece; Harokopio University of Athens, Athens, Greece","Exploring the Dominance of the English Language on the Websites of EU Countries","http://dx.doi.org/10.3390/fi12040076","papers","20200101Z00:00:00","","The English language is the most dominant language in the Western world and its influence can be noticed in every aspect of human communication. Its increasing diffusion, especially since the turn of the century, is hard to measure with conventional means. The present research studies the use of language in websites of European Union (EU) member states, in order to collect data about the prevalence of the English language in the different countries and regions of the European Union. To achieve a realistic representation of today’s landscape of the European Web, this study uses a vast population of websites and a representative sampling size and methodology. By analyzing and processing the findings from over 100,000 websites from every country in the EU, a solid foundation is set that is used to explore the dominance of the English language in the European World Wide Web in general. This is the first study that examines the presence of English content in the websites of all EU member countries and provides statistical evidence regarding the ratio of English content availability for each country. Conclusively, the results of the research demonstrate that the English language is available on more than one quarter of all websites of non-English speaking EU member states. Moreover, it is available in the vast majority of multilingual and bilingual websites, while at the same time being the only language that is available in a number of monolingual websites. In addition, it is shown preference over the national language in a significant number of cases. A moderate negative correlation is found between a member state’s population and the availability of English in these countries’ websites and the same holds true for a member state’s Gross Domestic Product (GDP). Both these correlations indicate that smaller countries tend to provide more content in English in order to establish a stronger presence in the international environment.
Taking into account the role of language in the expression of national identity, this study provides data and insights which may contribute to the discussion about the changes underway in the national identity of EU member states.","Ionian University, Corfu, Greece; Harokopio University of Athens, Athens, Greece","nlp/corpus-construction, web-science, socio-linguistics","The nature of the present research required as many websites as possible, so that both our total population and our sampling pool were as close a representation of reality as possible. For this purpose, we used information obtained from Common Crawl, a “repository of web crawl data that is universally accessible and analyzable” [34]. Among the data Common Crawl offers is an index of every available webpage for all member states of the EU amongst other countries. A process was developed in PHP: Hypertext Preprocessor (PHP) that used the CompounD indeX (CDX) server Application Program Interface (API) [35] to access Common Crawl’s Uniform Resource Locator (URL) index [36] and created a MariaDB database with information about websites from every member state of the EU. Although Common Crawl’s index provides all available crawled pages, our process of data collecting only focused on recording the landing page of one website per domain.","","","","" "Mukund Srinath, Shomir Wilson, C Lee Giles – Pennsylvania State University, PA, USA","Privacy at scale: Introducing the PrivaSeer corpus of web privacy policies","https://arxiv.org/abs/2004.11131","papers","20200101Z00:00:00","","","Pennsylvania State University, PA, USA","nlp/corpus-construction, web-science, internet-security/privacy-policies","We used Common Crawl² to gather seed URLs to crawl for privacy policies from the web, as we describe in detail below. We filtered the Common Crawl URLs to get a set of possible links to web site privacy policies. We then crawled the filtered set to obtain candidate privacy policy documents. The complete pipeline from the Common Crawl URL dump to the gold standard privacy policy corpus is shown in Figure 1. [...] The Common Crawl Foundation is a non-profit which has been releasing large monthly internet web crawls since 2008. Monthly crawl archives provide a “snapshot of the web” by including re-crawls of popular domains (re-crawls from previous archives) and crawls of new domains. Common Crawl has also been releasing a domain-level webgraph from which the harmonic centrality of the crawled domains are calculated. This webgraph is used to sample popular domains that need to be re-crawled and to obtain new uncrawled domains. We downloaded the URL dump of the May, 2019 archive. Common Crawl reports that the archive contains 2.65 billion web pages or 220 TB of uncompressed content which were crawled between 19th and 27th of May, 2019. They also report that this archive contains 825 million URLs which were not contained in any previously released crawl archives.
We applied a selection criteria on the downloaded URL dump to filter the URLs of likely privacy policy pages.","","","","" "Tianxi Dong, Jason Triche – Trinity University, San Antonio, TX, USA; University of Montana, MT, USA","A longitudinal analysis of job skills for entry-level data analysts","https://jise.org/Volume31/n4/JISEv31n4p312.pdf","papers","20200101Z00:00:00","","","Trinity University, San Antonio, TX, USA; University of Montana, MT, USA","business-intelligence, nlp/corpus-construction","Our first challenge was how to collect job postings over past years because job websites do not keep historical data for more than one year. Therefore, we used the Common Crawl dataset to address this problem (http://commoncrawl.org/). Common Crawl is a non-profit organization that builds and maintains an open repository of web crawl data that is, in essence, a copy of the Internet. Common Crawl data contains over 25 billion web pages (Batikas, Claussen, and Peukert, 2018) and is widely used in hundreds of research projects (Batikas, Claussen, and Peukert, 2018; Cafarella et al., 2018). Since we were only interested in the content from Indeed.com, we only examined a very small fraction of the Common Crawl corpus.","","","","" "Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel – Google; Stanford University; UC Berkeley; Northeastern University; OpenAI; Harvard University; Apple","Extracting training data from large language models","https://arxiv.org/abs/2012.07805","papers","20200101Z00:00:00","","","Google; Stanford University; UC Berkeley; Northeastern University; OpenAI; Harvard University; Apple","ai/ethical-concerns, nlp/language-models","We follow a different data collection process as used in GPT-2 (which follows Reddit links) in order to reduce the likelihood that our dataset has any intersection with the model’s training data. In particular, we select samples from a subset of Common Crawl⁶ [⁶http://commoncrawl.org/] to feed as context to the model.⁷ [⁷It is possible there is some intersection between these two datasets, effectively allowing this strategy to “cheat”. We believe this does not considerably affect results. First, any overlap between the two datasets is rare on average. Second, because we only use the first 5 or 10 tokens of each sample, any possible overlap will be small in absolute terms.]","","","","" "Thaer Sammar, Hadi Khalilia – Palestine Technical University, Tulkarm, West Bank","Going Back in Time to Find What Existed on the Web and How much has been Preserved: How much of Palestinian Web has been Archived?","http://proceedings.sriweb.org/akn/index.php/art/article/view/410","papers","20200101Z00:00:00","","The web is an important resource for publishing and sharing content. The main characteristic of the web is its volatility. Content is added, updated, and deleted all the time. Therefore, many national and international institutes started crawling and archiving the content of the web. The main focus of national institutes is to archive the web related to their country heritage, for example, the National Library of the Netherlands is focusing on archiving website that are of value to the Dutch heritage. However, there are still countries that haven’t taken the action to archive their web, which will result in loosing and having a gap in the knowledge. In this research, we focus on shedding the light on the Palestinian web. 
Precisely, how much of the Palestinian web has been archived. First, we create a list of Palestinian hosts that were on the web. For that we queried Google index exploiting the time range filter in order to get hosts overtime. We collected in 98 hosts in average in 5-years granularity from the year 1990 to 2019. We also obtained Palestinian hosts from the DMOZ directory. We collected 188 hosts. Second, we investigate the coverage of collected hosts in the Internet Archive and the Common-Crawl. We found that coverage of Google hosts in the Internet Archive ranges from 0\% to 89\% from oldest to newest time-granularity. The coverage of DMOZ hosts was 96\%. The coverage of Google hosts in the Common-Crawl 57.1\% to 74.3, while the coverage of DMOZ hosts in the Common-Crawl was in average 25\% in all crawls. We found that even the host is covered in Internet Archive and Common-Crawl, the lifespan and the number of archived versions are low.","Palestine Technical University, Tulkarm, West Bank","web-archiving/regional-coverage","","CDX index","","","" "Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith – Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, Seattle, USA","RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models","https://arxiv.org/abs/2009.11462","papers","20200101Z00:00:00","","","Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, Seattle, USA","no-citation-misclassified, ai/ethics-of-machine-learning, ai/machine-learning, nlp/language-model","","","","","" "Xinyue Wang, Zhiwu Xie – Virginia Polytechnic Institute and State University, Blacksburg, VA, USA","The Case For Alternative Web Archival Formats To Expedite The Data-To-Insight Cycle","https://doi.org/10.1145/3383583.3398542","papers","20200101Z00:00:00","storage management, big data analysis, web archiving, file format","The WARC file format is widely used by web archives to preserve collected web content for future use. With the rapid growth of web archives and the increasing interest to reuse these archives as big data sources for statistical and analytical research, the speed to turn these data into insights becomes critical. In this paper we show that the WARC format carries significant performance penalties for batch processing workload. We trace the root cause of these penalties to its data structure, encoding, and addressing method. We then run controlled experiments to illustrate how severe these problems can be. Indeed, performance gain of one to two orders of magnitude can be achieved simply by reformatting WARC files into Parquet or Avro formats. 
While these results do not necessarily constitute an endorsement for Avro or Parquet, the time has come for the web archiving community to consider replacing WARC with more efficient web archival formats.","Virginia Polytechnic Institute and State University, Blacksburg, VA, USA","web-archiving, data formats, big data, data processing, WARC, Parquet","","","","","" "Srdjan Matic, Costas Iordanou, Georgios Smaragdakis, Nikolaos Laoutaris – TU Berlin, Germany; Cyprus University of Technology, Cyprus; IMDEA Networks Institute","Identifying Sensitive URLs at Web-Scale","https://do.tu-berlin.de/handle/11303/13215","papers","20200101Z00:00:00","","Several data protection laws include special provisions for protecting personal data relating to religion, health, sexual orientation, and other sensitive categories. Having a well-defined list of sensitive categories is sufficient for filing complaints manually, conducting investigations, and prosecuting cases in courts of law. Data protection laws, however, do not define explicitly what type of content falls under each sensitive category. Therefore, it is unclear how to implement proactive measures such as informing users, blocking trackers, and filing complaints automatically when users visit sensitive domains. To empower such use cases we turn to the Curlie.org crowdsourced taxonomy project for drawing training data to build a text classifier for sensitive URLs. We demonstrate that our classifier can identify sensitive URLs with accuracy above 88%, and even recognize specific sensitive categories with accuracy above 90%. We then use our classifier to search for sensitive URLs in a corpus of 1 Billion URLs collected by the Common Crawl project. We identify more than 155 millions sensitive URLs in more than 4 million domains. Despite their sensitive nature, more than 30% of these URLs belong to domains that fail to use HTTPS. Also, in sensitive web pages with third-party cookies, 87% of the third-parties set at least one persistent cookie.","TU Berlin, Germany; Cyprus University of Technology, Cyprus; IMDEA Networks Institute","computer-security/internet-security, privacy, GDPR, general data protection regulation","When it comes to detecting specific sensitive categories, such as those defined by GDPR: Health, Politics, Religion, Sexual Orientation, Ethnicity, our classifier achieves a high classification accuracy as well. For specific categories, such as Health (98%), Politics (92%), Religion (97%), our classifier achieves an accuracy that exceeds the basic classification accuracy between sensitive and non-sensitive URLs (88%).¶ • Applying our classifier on a Common Crawl snapshot of the English speaking Web (around 1 Billion URLs), we identify 155 million sensitive URLs in more than 4 million domains. Health, Religion, and Political Beliefs are the most popular categories with around 70 millions, 35 millions, and 32 millions URLs respectively.¶ • Looking among the identified sensitive URLs we reach the conclusion that sensitive URLs are handled as any other URL, without any special provision for the privacy of users. For example, we show that 30% of sensitive URLs are hosted in domains that fail to use HTTPS. 
Also, in sensitive web pages with third-party cookies, 87% of the third-parties sets at least one persistent cookie.","","","","" "Sebastian Nagel – Common Crawl","Experiments using a Distributed Web Crawler to Process and Index Web Archives","https://doi.org/10.5281/zenodo.4609371","papers","20200101Z00:00:00","","","Common Crawl","web crawling, web archiving","","","","","" "Sebastian Roth, Timothy Barron, Stefano Calzavara, Nick Nikiforakis, Ben Stock – CISPA Helmholtz Center for Information Security, Germany; Stony Brook University, USA; Università Ca’ Foscari, Venezia, Italy","Complex security policy? a longitudinal analysis of deployed content security policies","https://par.nsf.gov/biblio/10173479","papers","20200101Z00:00:00","","The Content Security Policy (CSP) mechanism was developed as a mitigation against script injection attacks in 2010. In this paper, we leverage the unique vantage point of the Internet Archive to conduct a historical and longitudinal analysis of how CSP deployment has evolved for a set of 10,000 highly ranked domains. In doing so, we document the long- term struggle site operators face when trying to roll out CSP for content restriction and highlight that even seemingly secure whitelists can be bypassed through expired or typo domains. Next to these new insights, we also shed light on the usage of CSP for other use cases, in particular, TLS enforcement and framing control. Here, we find that CSP can be easily deployed to fit those security scenarios, but both lack wide-spread adoption. Specifically, while the underspecified and thus inconsistently implemented X-Frame-Options header is increasingly used on the Web, CSP’s well-specified and secure alternative cannot keep up. To understand the reasons behind this, we run a notification campaign and subsequent survey, concluding that operators have often experienced the complexity of CSP (and given up), utterly unaware of the easy-to-deploy components of CSP. Hence, we find the complexity of secure, yet functional content restriction gives CSP a bad reputation, resulting in operators not leveraging its potential to secure a site against the non-original attack vectors.","CISPA Helmholtz Center for Information Security, Germany; Stony Brook University, USA; Università Ca’ Foscari, Venezia, Italy","computer-security/internet-security, web-science","To determine this IA-specific influence, we chose a second archive service to corroborate the IA’s data. In particular, Common Crawl (CC) [10] has been collecting snapshots of popular sites since 2013. For each date on which we found a CSP in the IA, we queried the CC API for a matching snapshot. Overall, we found 38,129 overlapping snapshots for 940 sites. Out of these, 729 (1.9%) on 127 sites were inconsistent between the two archives. For 96 cases the difference was the lack of block-all-mixed-content or upgrade-insecure-requests in the CC data. Further investigation showed that in the IA, these directives were separated from the remaining CSP with a comma instead of a semicolon. This likely relates to the IA joining headers with the same name with a comma. For those pages, we could always only find a single CSP header in the CC response. Moreover, starting from August 2018, these sites still used the aforementioned directives in the IA data, but CC returned two CSP headers (one including only those directives). 
Hence, we speculate this relates to a bug in CC, which was fixed around August 2018.","","","","" "Frankie Robertson, Jarkko Lagus, Kaisla Kajava – University of Jyväskylä, Finland; University of Helsinki, Finland","A COVID-19 news coverage mood map of Europe","https://www.aclweb.org/anthology/2021.hackashop-1.15","papers","20210101Z00:00:00","","We present a COVID-19 news dashboard which visualizes sentiment in pandemic news coverage in different languages across Europe. The dashboard shows analyses for positive/neutral/negative sentiment and moral sentiment for news articles across countries and languages. First we extract news articles from news-crawl. Then we use a pre-trained multilingual BERT model for sentiment analysis of news article headlines and a dictionary and word vectors -based method for moral sentiment analysis of news articles. The resulting dashboard gives a unified overview of news events on COVID-19 news overall sentiment, and the region and language of publication from the period starting from the beginning of January 2020 to the end of January 2021.","University of Jyväskylä, Finland; University of Helsinki, Finland","nlp/corpus-construction, nlp/sentiment-analysis","","CC-NEWS","","","" "Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Matt Gardner – Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, USA","Documenting large webtext corpora: a case study on the Colossal Clean Crawled Corpus","https://arxiv.org/abs/2104.08758","papers","20210101Z00:00:00","","Large language models have led to remarkable progress on many NLP tasks, and researchers are turning to ever-larger text corpora to train them. Some of the largest corpora available are made by scraping significant portions of the internet, and are frequently introduced with only minimal documentation. In this work we provide some of the first documentation for the Colossal Clean Crawled Corpus (C4; Raffel et al., 2020), a dataset created by applying a set of filters to a single snapshot of Common Crawl. We begin by investigating where the data came from, and find a significant amount of text from unexpected sources like patents and US military websites. Then we explore the content of the text itself, and find machine-generated text (e.g., from machine translation systems) and evaluation examples from other benchmark NLP datasets. To understand the impact of the filters applied to create this dataset, we evaluate the text that was removed, and show that blocklist filtering disproportionately removes text from and about minority individuals. Finally, we conclude with some recommendations for how to created and document web-scale datasets from a scrape of the internet.","Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, USA","nlp/corpus-construction, nlp/language-model","","CC-MAIN-2019-18 (WET)","Tensorflow-C4, Huggingface-Allenai-C4-English","","" "Isaac Caswell, Julia Kreutzer, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Javier Ortiz Suárez, Iroro Orife, Kelechi Ogueji, Rubungo Andre Niyongabo, Toan Q. 
Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, Mofetoluwa Adeyemi – Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja","Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets","https://arxiv.org/abs/2103.12028","papers","20210101Z00:00:00","","With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, web-mined text datasets covering hundreds of languages. However, to date there has been no systematic analysis of the quality of these publicly available datasets, or whether the datasets actually contain content in the languages they claim to represent. In this work, we manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4), and audit the correctness of language codes in a sixth (JW300). We find that lower-resource corpora have systematic issues: at least 15 corpora are completely erroneous, and a significant fraction contains less than 50\% sentences of acceptable quality. Similarly, we find 82 corpora that are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-speakers of the languages in question, and supplement the human judgements with automatic analyses. Inspired by our analysis, we recommend techniques to evaluate and improve multilingual corpora and discuss the risks that come with low-quality data releases.","Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja","nlp/corpus-construction, nlp/web-as-corpus, nlp/parallel-corpus, nlp/low-resource-language","We selected the corpora for their multilinguality and the inclusion of understudied languages in NLP. 
With the exception of WikiMatrix and Paracrawl, all corpora are derived from CommonCrawl, and distinguish themselves by the choice of filtering methods, LangID and automatic alignment technology.","","CCAligned-2020, Tensorflow-C4-Multilingual, OSCAR","","" "P. Kalaharsha, B. M. Mehtre – Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, India; School of Computer Science and Information Sciences (SCIS), University of Hyderabad, Hyderabad, India","Detecting Phishing Sites -- An Overview","https://arxiv.org/abs/2103.12739","papers","20210101Z00:00:00","","","Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, India; School of Computer Science and Information Sciences (SCIS), University of Hyderabad, Hyderabad, India","computer-security/internet-security, computer-security/malicious-domain-detection","Alexa and Common crawl contains names of the legitimate sites which are likely to be used for phishing [62][63]. [63:http://index.commoncrawl.org]","","","","" "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel – Google Research","mT5: A massively multilingual pre-trained text-to-text transformer","https://arxiv.org/abs/2010.11934","papers","20210101Z00:00:00","","","Google Research","nlp/corpus-construction, nlp/web-as-corpus, nlp/language-model","[...] we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages.","CC-MAIN-2019-18 (WET)","Tensorflow-C4-Multilingual (mC4)","","" "Bilal Tahir, Muhammad Amir Mehmood – University of Engineering and Technology, Lahore, Pakistan","Corpulyzer: A Novel Framework for Building Low Resource Language Corpora","https://ieeexplore.ieee.org/document/9316706","papers","20210101Z00:00:00","","","University of Engineering and Technology, Lahore, Pakistan","nlp/corpus-construction, nlp/web-as-corpus, nlp/low-resource-language","Leveraging dataset from Common Crawl Corpus (CCC), first, we prepare a list of seed URLs by filtering the Urdu language webpages. Next, we use Corpulyzer to crawl the World-Wide-Web (WWW) over a period of four years (2016-2020). We build Urdu web corpus “UrduWeb20” that consists of 8.0 million Urdu webpages crawled from 6,590 websites. [...] building a corpus of a low-resource language from CCC is a challenging task due to: i) sampling techniques, ii) filtering of webpages of target languages, and iii) full parsing of CCC. [...] we build upon our previous approach [40] where we developed a dataset consisting of 1.28 million Urdu webpages from CCC 2016 dataset. [...] In general, CCC release meta-data as well as the crawled content where former is lightweight and easier to analyze and latter requires huge bandwidth to download and store the data. As an alternate strategy, we build three datasets using CC released data: i) CC-meta, ii) CC-Urdu-meta, and iii) CC-Urdu-crawl. First, we build CC-meta dataset to explore the impact of URL selection and crawling strategies of Common Crawl in general. This dataset consists of meta-information of 29.1 billion URLs in 11 common crawl releases from September 2018 – June 2019. This meta-information of each release is available in the form of compressed files (>200GB size) with information of webpage URL, MIME-type, and charset etc [94]. Next, we build CC-Urdu-meta dataset by filtering out Urdu webpages.
We note that from August 2018 onward releases [95], CC also provides ISO6 language code of top three languages present in webpages after parsing HTML of the webpage from CLD2.","","","","" "Alexandra Sasha Luccioni, Joseph D. Viviano – Université de Montréal, Canada; Mila Québec AI Institute, Canada","What's in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus","https://arxiv.org/abs/2105.02732","papers","20210101Z00:00:00","","","Université de Montréal, Canada; Mila Québec AI Institute, Canada","ai/ethics-of-machine-learning, nlp/corpus-construction, nlp/text-corpora","Given its size, both downloading and analyzing the Common Crawl are time-consuming and costly endeavors. The most recent version of the Common Crawl [https://commoncrawl.org/2020/12/nov-dec-2020-crawl-archive-now-available/], dating from November/December 2020, has 2.6 billion web pages in raw text format, saved in ‘shards’ each containing tens of thousands of pages. Given our hardware constraints, we chose to focus on a subset of the corpus, randomly sampling 1% of the files it contains, roughly amounting to 81 GB of textual content or 5,835,339 webpages in total, which we analyzed in terms of hate speech, adult content, and efficacy of perplexity-based filtering. All code used in these analyses is publicly available¹ [¹https://github.com/josephdviviano/whatsinthebox]. [...] We found that the three approaches compared suggest similar proportions of websites containing hate speech: 5.24% of websites from our sample were flagged by DELIMIT, 4.02% by HateSonar, and 6.38% by the n-gram approach². [²We are conscious of the high false positive rate of n-gram approaches and therefore only consider sites to be flagged if they contain 3 or more n-grams from the list.] Qualitative analysis of a sample of sites flagged by each approach showed that while n-grams picked up on racial slurs, HateSonar picked up on debates about racial supremacy and conspiracy theories. Many of the sites that DELIMIT flagged were adult content with mentions of violent acts towards specific ethnic groups, illustrating the fine line between sexual violence and hate speech. [...] While it can be argued that the Common Crawl corpus is an accurate portrayal of the discourse of modern society – which includes sexual content, hate speech, racial biases, and gender biases – we believe that it is up for debate whether this discourse is the one that we, as a community, want to use to train the models that translate our texts, influence our search results and answer our questions.
Notably, the Common Crawl overrepresents those populations that are avid users of the internet: younger, English-speaking individuals from developed countries, [...]","","","","" "Maik Fröbe, Janek Bevendorff, Lukas Gienapp, Michael Völske, Benno Stein, Martin Potthast, Matthias Hagen – Martin-Luther-Universität Halle-Wittenberg, Germany; Bauhaus-Universität Weimar, Germany; Leipzig University, Germany","CopyCat: Near-Duplicates within and between the ClueWeb and the Common Crawl","https://dl.acm.org/doi/10.1145/3404835.3463246","papers","20210101Z00:00:00","","","Martin-Luther-Universität Halle-Wittenberg, Germany; Bauhaus-Universität Weimar, Germany; Leipzig University, Germany","ir/duplicate-detection","","CC-MAIN-2015-11, CC-MAIN-2017-04","","","" "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy – EleutherAI","The Pile: An 800GB Dataset of Diverse Text for Language Modeling","https://arxiv.org/abs/2101.00027","papers","20210101Z00:00:00","","Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models. With this in mind, we present the Pile: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets—both existing and newly constructed—many of which derive from academic or professional sources. Our evaluation of the untuned performance of GPT-2 and GPT-3 on the Pile shows that these models struggle on many of its components, such as academic writing. Conversely, models trained on the Pile improve significantly over both Raw CC and CC-100 on all components of the Pile, while improving performance on downstream evaluations. Through an in-depth exploratory analysis, we document potentially concerning aspects of the data for prospective users. We make publicly available the code used in its construction.¹ [¹https://pile.eleuther.ai/]","EleutherAI","nlp/corpus-construction, nlp/text-corpora, nlp/language-model, nlp/text-corpora/legal-aspects","The growing need for data in language modeling has caused most existing large-scale language models to turn to the Common Crawl for most or all of their data (Brown et al., 2020; Raffel et al., 2019). While training on the Common Crawl has been effective, recent work has shown that dataset diversity leads to better downstream generalization capability (Rosset, 2019). [...] we also introduce a new filtered subset of Common Crawl, Pile-CC, with improved extraction quality. [...] 2.1 Pile-CC Common Crawl is a collection of website crawls from 2008 onwards, including raw web pages, metadata and text extractions. Due to the raw nature of the dataset, Common Crawl has the advantage of including text from diverse domains, but at the cost of varying quality data. Due to this, use of Common Crawl typically necessitates well-designed extraction and filtering. Our Common Crawl-based dataset, Pile-CC, uses jusText (Endrédy and Novák, 2013) on Web Archive files (raw HTTP responses including page HTML) for extraction, which yields higher quality output than directly using the WET files (extracted plain-text). [...] Surprisingly, raw Common Crawl performs better on the Pile BPB than CC-100, despite losing by a significant margin on LAMBADA and WikiText. 
We hypothesize that this is due to the perplexity based filtering used in CC-100, where a language model is trained on Wikipedia and all data with a perplexity too high or too low is discarded. This effectively discards any data too similar to or too different from Wikipedia, which severely limits the diversity of the collected data. This result suggests that future work using Common Crawl should take caution with filtering to preserve its diversity.","69 monthly crawls (WARC): CC-MAIN-2013-20 - CC-MAIN-2020-24, cf. https://github.com/leogao2/commoncrawl_downloader/blob/3a7a4a7c33aaee2a45f320f7bc57d0dcd3f3a220/indexes_20200607105929","The-Pile-English","","" "Leon Derczynski, Manuel R. Ciosici, Rebekah Baglini, Morten H. Christiansen, Jacob Aarup Dalsgaard, Riccardo Fusaroli, Peter Juel Henrichsen, Rasmus Hvingelby, Andreas Kirkedal, Alex Speed Kjeldsen, Claus Ladefoged, Finn Årup Nielsen, Jens Madsen, Malte Lau Petersen, Jonathan Hvithamar Rystrøm, Daniel Varab – ITU Copenhagen, Denmark; Aarhus University, Denmark; Danish Language Council, Denmark; TV2 Regionerne, Denmark; Karnov Group, Denmark; USC Information Sciences Institute, USA; Alexandra Institute, Denmark; University of Copenhagen, Denmark; Technical University of Denmark; Novo Nordisk, Denmark","The Danish Gigaword Corpus","https://gigaword.dk/","papers","20210101Z00:00:00","","","ITU Copenhagen, Denmark; Aarhus University, Denmark; Danish Language Council, Denmark; TV2 Regionerne, Denmark; Karnov Group, Denmark; USC Information Sciences Institute, USA; Alexandra Institute, Denmark; University of Copenhagen, Denmark; Technical University of Denmark; Novo Nordisk, Denmark","nlp/corpus-construction, nlp/text-corpora","[...] the Danish section of Common Crawl is plagued by significant amounts of non-Danish content, in part due to the pervasive confusion between Danish and Norwegian Bokmål by highly multilingual language ID classifiers (Haas and Derczynski, 2021). Datasets derived exclusively from Common Crawl also have a bias toward webspeak and content from recent years, leaving models built over them sub-optimally prepared to process older Danish. Common Crawl’s undirected collection of content often overrepresents some dialects at the expense of other dialects.","","","","" "Patrick Dinklage, Jonas Ellert, Johannes Fischer, Florian Kurpicz, Marvin Löbel – TU Dortmund University, Germany","Practical Wavelet Tree Construction","https://doi.org/10.1145/3457197","papers","20210101Z00:00:00","text indexing, shared memory, external memory, distributed memory, data structures","We present new sequential and parallel algorithms for wavelet tree construction based on a new bottom-up technique. This technique makes use of the structure of the wavelet trees—refining the characters represented in a node of the tree with increasing depth—in an opposite way, by first computing the leaves (most refined), and then propagating this information upwards to the root of the tree. We first describe new sequential algorithms, both in RAM and external memory. Based on these results, we adapt these algorithms to parallel computers, where we address both shared memory and distributed memory settings. In practice, all our algorithms outperform previous ones in both time and memory efficiency, because we can compute all auxiliary information solely based on the information we obtained from computing the leaves.
Most of our algorithms are also adapted to the wavelet matrix, a variant that is particularly suited for large alphabets.","TU Dortmund University, Germany","data-structures, text-indexing","Common Crawl. The Common Crawl corpus contains websites that are crawled by the Common Crawl Project. We use the WET files, which contain only the textual data of the crawled websites, i. e., no HTML tags. We also removed the meta information added by the Commoncrawl corpus. To be more precise, we used the following WET files: crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/wet/CC-MAIN-20190215183319-20190215205319-#ID.warc.wet, where #ID is in the range from 00000 to 00600. As we only care for the text, we removed the WARC meta information, i. e., each line consisting of WARC/1.0 and the following eight lines. CommonCrawl is the concatenation of all files sorted in ascending order by their ID.","CC-MAIN-2019-09 (600 WET files)","","","" "Jay A. Olson, Johnny Nahas, Denis Chmoulevitch, Simon J. Cropper, Margaret E. Webb – Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, McGill University, Montreal, QC, Canada; Melbourne School of Psychological Sciences, University of Melbourne, Australia","Naming unrelated words predicts creativity","https://www.pnas.org/content/118/25/e2022340118","papers","20210101Z00:00:00","","Many traditional measures of creativity require time-intensive and subjective scoring procedures. Their scores are relative to the specific sample, which makes multicultural or international assessments difficult. Our results show that a shorter and simpler task with automatic and objective scoring may be at least as reliable at measuring verbal creativity. This finding enables assessments across larger and more diverse samples with less bias.Several theories posit that creative people are able to generate more divergent ideas. If this is correct, simply naming unrelated words and then measuring the semantic distance between them could serve as an objective measure of divergent thinking. To test this hypothesis, we asked 8,914 participants to name 10 words that are as different from each other as possible. A computational algorithm then estimated the average semantic distance between the words; related words (e.g., cat and dog) have shorter distances than unrelated ones (e.g., cat and thimble). We predicted that people producing greater semantic distances would also score higher on traditional creativity measures. In Study 1, we found moderate to strong correlations between semantic distance and two widely used creativity measures (the Alternative Uses Task and the Bridge-the-Associative-Gap Task). In Study 2, with participants from 98 countries, semantic distances varied only slightly by basic demographic variables. There was also a positive correlation between semantic distance and performance on a range of problems known to predict creativity. Overall, semantic distance correlated at least as strongly with established creativity measures as those measures did with each other. 
Naming unrelated words in what we call the Divergent Association Task can thus serve as a brief, reliable, and objective measure of divergent thinking. The data and algorithm code have been deposited in the Open Science Framework (https://osf.io/vjazn/).","Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, McGill University, Montreal, QC, Canada; Melbourne School of Psychological Sciences, University of Melbourne, Australia","psychology/creativity, psychology/computational-scoring, nlp/word-embeddings","We chose the GloVe algorithm and the Common Crawl corpus [...]","","","GloVe-word-embeddings","" "Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer – Facebook AI; University of Washington, USA","HTLM: Hyper-Text Pre-Training and Prompting of Language Models","https://arxiv.org/abs/2107.06955","papers","20210101Z00:00:00","","We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hypertext formatting for any available training data. We will release all code and models to support future HTLM research.","Facebook AI; University of Washington, USA","nlp/corpus-construction, nlp/text-corpora, nlp/transformer-language-model","Our HyperText Language Model (HTLM) is trained on 23TB of simplified HTML which we automatically extract from common crawl dumps [...]","","","","" "Julien Abadji, Pedro Javier Ortiz Suárez, Laurent Romary, Benoît Sagot – Inria, Paris, France; Sorbonne Université, Paris, France","Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus","https://ids-pub.bsz-bw.de/frontdoor/deliver/index/docId/10468/file/Abadji_Suarez_Romary_Ungoliant_2021.pdf","papers","20210101Z00:00:00","","Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus.
We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.","Inria, Paris, France; Sorbonne Université, Paris, France","nlp/corpus-construction, nlp/text-corpora","","","","","" "Guy Grossman, Stephanie Zonszein – University of Pennsylvania, USA","Voted In, Standing Out: Public Response to Immigrants' Political Accession","https://osf.io/xd4wk/","papers","20210101Z00:00:00","","In a context of nativism and poor representation of immigrant-origin ethnic minorities, what is the reaction of the host society when immigrants succeed at integration in political institutions? Building on threat theory—which links minorities’ political power to hostility against minoritized groups—we argue that when they win political office, immigrants pose a threat to natives’ dominant position. This in turn triggers a hostile reaction from a violent-prone fringe, the mass public and the elites. We test these dynamics across the last four UK general elections, using hate crime police records, public opinion data, and text data from over 500,000 news articles from 350 national and local newspapers. We identify the public’s hostile reactions with a regression discontinuity design that leverages close election results between minority-immigrant and dominant group candidates. Our findings suggest a public backlash against ethnic minority immigrants’ integration into majority settings.","University of Pennsylvania, USA","political science, sociology, political integration of immigrants, ethnic minorities","News articles were extracted from Common Crawl, ethnic background of candidates is constructed by the authors, and constituency characteristics from 2001 and 2011 UK Decennial Census. [...] Then, to obtain the articles published by each of these newspapers, we looked up the URLs in Common Crawl (an open repository of web crawl data containing a snapshot of every web page at the moment of the crawl). Particularly in the Index for 2020-16 crawl, the most recent crawl at that moment. We retrieved the WARC (Web ARChive format) records for each crawled page from the newspaper, and extracted the pages’ HTML. From the HTML, we extracted the text, title, and byline using the Python package readabiliPy; the publication date using the Python library htmldate; the location by tokenizing the article with CoreNLP, and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Figure D.1 presents the geographical coverage of all extracted articles across constituencies.¶ [...] 4.3 Media tone toward migrant groups¶ Data We use data from over 500,000 articles from 350 national, regional and local UK newspapers, covering the general elections from 2010–2019.⁸ This data is from Common Crawl, which is an open repository of web crawl data.
We assume that an article refers to a candidate’s ethnic group when three conditions are met: 1) the publication date is on election day and up to 10 months after each general election⁹, 2) the article contains mentions of terms referring to the candidate’s country or nationality of origin, which are extracted with the named entity annotator of CoreNLP and 3) such mentions co-occur in the article with a mention referring to the candidate’s constituency. The constituency is extracted by tokenizing the article with CoreNLP and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Overall, this data includes almost 150,000 mentions from 156 newspapers that meet these three conditions about the candidates’ group. [...] D Newspaper data, computation of media tone measures and validation of key elements Newspaper data We construct the dataset of newspaper articles using the following steps. To determine a comprehensive list of UK newspapers, we first identified a list of seed categories on Wikipedia (WP) (e.g. ’Category:Newspapers_published_in_England’), we took the recursive items of those categories (e.g. ’Category:Newspapers_published_in_England’ > ’Category:Newspapers_published_in_London’), we used WP article properties to filter out articles about non-newspapers (e.g. people, books), and we extracted the newspaper URLs from the WP Infobox using the Python package wptools. With this process we identified a list of UK newspapers URLs containing 337 newspapers in total. Then, to obtain the articles published by each of these newspapers, we looked up the URLs in Common Crawl (an open repository of web crawl data containing a snapshot of every web page at the moment of the crawl). Particularly in the Index for 2020-16 crawl, the most recent crawl at that moment. We retrieved the WARC (Web ARChive format) records for each crawled page from the newspaper, and extracted the pages’ HTML. From the HTML, we extracted the text, title, and byline using the Python package readabiliPy; the publication date using the Python library htmldate; the location by tokenizing the article with CoreNLP, and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Figure D.1 presents the geographical coverage of all extracted articles across constituencies.","CC-MAIN-2020-16","","","" "Helen Ngo, João G. M. Araújo, Jeffrey Hui, Nicholas Frosst – Cohere, Toronto, Canada","No News is Good News: A Critique of the One Billion Word Benchmark","https://arxiv.org/abs/2110.12609","papers","20210101Z00:00:00","","The One Billion Word Benchmark is a dataset derived from the WMT 2011 News Crawl, commonly used to measure language modeling ability in natural language processing. We train models solely on Common Crawl web scrapes partitioned by year, and demonstrate that they perform worse on this task over time due to distributional shift. Analysis of this corpus reveals that it contains several examples of harmful text, as well as outdated references to current events. 
We suggest that the temporal nature of news and its distribution shift over time makes it poorly suited for measuring language modeling ability, and discuss potential impact and considerations for researchers building language models and evaluation datasets.","Cohere, Toronto, Canada","nlp/language-model, nlp/language-model/perplexity","Common Crawl is a repository of web scrapes of the internet updated annually and is often used as a key data source for language models built on the open web [8, 2, 1]. We train benchmark models on three distinct datasets created by selecting data sampled from different years of Common Crawl: 2013 (the year which lm1b was released), 2016, and 2020. [...] Models which are trained on datasets temporally further removed from the lm1b corpus source (i.e. WMT 2011 News Crawl dataset) exhibit higher perplexity than those trained on datasets which are temporally closer.","","","","" "Leo Gao – EleutherAI","An Empirical Exploration in Quality Filtering of Text Data","https://arxiv.org/abs/2109.00698","papers","20210101Z00:00:00","","While conventional wisdom suggests that more aggressively filtering data from low-quality sources like Common Crawl always monotonically improves the quality of training data, we find that aggressive filtering can in fact lead to a decrease in model quality on a wide array of downstream tasks for a GPT-like language model. We speculate that this is because optimizing sufficiently strongly for a proxy metric harms performance on the true objective, suggesting a need for more robust filtering objectives when attempting to filter more aggressively. We hope this work leads to detailed analysis of the effects of dataset filtering design choices on downstream model performance in future work.","EleutherAI","nlp/language-model, nlp/corpus-construction","The recent proliferation of ever larger language models has led to increasing demands on training data (Radford et al., 2018, 2019; Gokaslan and Cohen, 2019; Rosset, 2019; Shoeybi et al., 2019; Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020; Zeng et al., 2021). This data is increasingly derived from internet corpora like Common Crawl (Radford et al., 2019; Ortiz Suárez et al., 2019; Wenzek et al., 2020; Conneau et al., 2020; Brown et al., 2020; Gao et al., 2020; Raffel et al., 2020). However, the quality of raw Common Crawl data is often insufficient to be directly used. To combat this, many existing works use some kind of proxy for quality, like a classifier between known high quality data and low quality data (Brown et al., 2020; Gao et al., 2020; Zeng et al., 2021), handcrafted heuristics (Yang et al., 2020; Raffel et al., 2020), or keeping only documents with perplexity scores that fall in some middle quantile of an existing language model (Wenzek et al., 2020). Brown et al. (2020) in particular filter extremely aggressively using their classifier, discarding about 98.7% of their data. Previous work has shown that models trained on heuristic-filtered datasets perform better on downstream tasks (Raffel et al., 2020). However, Gao et al. (2020) show that a perplexity-filtered CC-derived dataset actually performs worse than unfiltered CC on certain tasks. [...] We hypothesize that this decline in performance is because of misalignment between the classifier objective, intended to be a proxy for quality, and actual document quality.
For instance, a classifier to distinguish WebText2 from Common Crawl, as in GPT-3, would also exclude domains of text data not found as often in WebText2.","","","","" "Abeba Birhane, Vinay Uday Prabhu, Emmanuel Kahembwe – University College Dublin, Ireland; Lero, Dublin, Ireland; University of Edinburgh, UK","Multimodal datasets: misogyny, pornography, and malignant stereotypes","https://arxiv.org/abs/2110.01963","papers","20210101Z00:00:00","","We have now entered the era of trillion parameter machine learning models trained on billion-sized datasets scraped from the internet. The rise of these gargantuan datasets has given rise to formidable bodies of critical work that has called for caution while generating these large datasets. These address concerns surrounding the dubious curation practices used to generate these datasets, the sordid quality of alt-text data available on the world wide web, the problematic content of the CommonCrawl dataset often used as a source for training large language models, and the entrenched biases in large-scale visio-linguistic models (such as OpenAI's CLIP model) trained on opaque datasets (WebImageText). In the backdrop of these specific calls of caution, we examine the recently released LAION-400M dataset, which is a CLIP-filtered dataset of Image-Alt-text pairs parsed from the Common-Crawl dataset. We found that the dataset contains troublesome and explicit images and text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content. We outline numerous implications, concerns and downstream harms regarding the current state of large scale datasets while raising open questions for various stakeholders including the AI community, regulators, policy makers and data subjects.","University College Dublin, Ireland; Lero, Dublin, Ireland; University of Edinburgh, UK","ai/ethics-of-machine-learning, nlp/corpus-construction, nlp/text-corpora, nlp/multimodal-corpora","1.3 The Common-Crawl Common Crawl is a San Francisco based nonprofit 501(c)(3) organization that has been regularly crawling the entire WWW and generating archival snapshot data-dumps, often termed the Common-Crawl (CC) datasets in machine learning lexicon, since 2011. The current version of this archive (dated April 2021) is roughly 320 TB in size and spans 3.1 billion pages. The sheer scale of this dataset has an enduring allure in the AI community and has been used as a seeding dataset in training pipelines of high-profile projects⁵ [⁵https://commoncrawl.org/the-data/examples/] such as GPT-3 [34], CLUECorpus2020 [35], and XLM-R [36]. Inevitably this gargantuan dataset mined from the WWW suffers from serious issues. For instance, Matic et al. [37] used the Curlie.org crowdsourced taxonomy project to train a GDPR-Article(9)-Sensitive-URL classifier which revealed that, of the 1 Billion URLs they audited in the Common Crawl project, 155 million URLs fell into the sensitive category. The Realtoxicityprompts work [38] revealed that CommonCrawl contained over 300,000 documents from unreliable news sites and banned subReddit pages containing hate speech and racism. More recently, Luccioni and Viviano’s initial study [39] placed the ‘Hate speech’ content level to be around 4.02%-5.24% (the 1+ hate n-grams level was estimated higher at 17.78%). With regards to CCAligned, a 119-language parallel dataset built off 68 snapshots of Common Crawl, Caswell et al.
[40] revealed that there were notable amounts of pornographic content (> 10%) found for 11 languages with prevalence rates being as high as 24% for language pairs such as en-om_KE. The LAION-400M dataset emerges from this landscape containing hundreds of millions of Image- Alt-text pairs parsed from the Common-Crawl dataset and filtered using a previously Common-Crawl trained AI model (CLIP [2]). With this background, we present our findings following our initial audit of the LAION-400M dataset below.","","","","" "Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, Aran Komatsuzaki – LAION.ai; Gentec Data, Romania; Technical University of Munich, Germany; Juelich Supercomputing Center, Germany; Georgia Institute of Technology; USA; EleutherAI","LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs","https://arxiv.org/abs/2111.02114","papers","20210101Z00:00:00","","Multi-modal language-vision models trained on hundreds of millions of image-text pairs (e.g. CLIP, DALL-E) gained a recent surge, showing remarkable capability to perform zero- or few-shot learning and transfer even in absence of per-sample labels on target image data. Despite this trend, to date there has been no publicly available datasets of sufficient scale for training such models from scratch. To address this issue, in a community effort we build and release for public LAION-400M, a dataset with CLIP-filtered 400 million image-text pairs, their CLIP embeddings and kNN indices that allow efficient similarity search.","LAION.ai; Gentec Data, Romania; Technical University of Munich, Germany; Juelich Supercomputing Center, Germany; Georgia Institute of Technology; USA; EleutherAI","nlp/corpus-construction, nlp/multimodal-corpora","2.1 Distributed processing of Common Crawl¶ To create image-text pairs, we parse through WAT files from Common Crawl and parse out all HTML IMG tags containing an alt-text attribute. We download the raw images from the parsed URLs with asynchronous requests using Trio and Asks libraries.¶ 2.1.1 Filtering out unsuitable image-text pairs¶ After downloading the WAT files from Common Crawl, we apply the following filtering conditions: • All samples with less than 5 character alt-text length or less than 5 KB image size are dropped.¶ • Duplicate removal is performed with bloom filter based on URL and alt-text.¶ • We use CLIP to compute embeddings of the image and alt-text. Then we compute the cosine similarity of both embeddings and drop all samples with cosine similarity below 0.3. This threshold was selected based on human inspections.¶ • We use the CLIP embeddings of images and texts to filter out illegal contents.","","LAION-400M","","" "Michael Bugert, Iryna Gurevych – Ubiquitous Knowledge Processing Lab (UKP), Technical University of Darmstadt, Germany","Event Coreference Data (Almost) for Free: Mining Hyperlinks from Online News","https://aclanthology.org/2021.emnlp-main.38","papers","20210101Z00:00:00","","Cross-document event coreference resolution (CDCR) is the task of identifying which event mentions refer to the same events throughout a collection of documents. Annotating CDCR data is an arduous and expensive process, explaining why existing corpora are small and lack domain coverage. 
To overcome this bottleneck, we automatically extract event coreference data from hyperlinks in online news: When referring to a significant real-world event, writers often add a hyperlink to another article covering this event. We demonstrate that collecting hyperlinks which point to the same article(s) produces extensive and high-quality CDCR data and create a corpus of 2M documents and 2.7M silver-standard event mentions called HyperCoref. We evaluate a state-of-the-art system on three CDCR corpora and find that models trained on small subsets of HyperCoref are highly competitive, with performance similar to models trained on gold-standard data. With our work, we free CDCR research from depending on costly human-annotated training data and open up possibilities for research beyond English CDCR, as our data extraction approach can be easily adapted to other languages.","Ubiquitous Knowledge Processing Lab (UKP), Technical University of Darmstadt, Germany","nlp/coreference resolution, event detection","To this end, we devise a data extraction pipeline which mines such datasets automatically from Common Crawl² [²https://commoncrawl.org/] and apply it to create the HYPERCOREF corpus, consisting of 40 news outlets with over 2M mentions in total, far exceeding the size of existing CDCR corpora.","","","","" "Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oğuz, Edouard Grave, Wen-tau Yih, Sebastian Riedel – Facebook AI Research; University College London, United Kingdom; University of Mannheim, Germany; ENS, PSL University, France; Inria, France; University of Washington, United States","The Web Is Your Oyster - Knowledge-Intensive NLP against a Very Large Web Corpus","https://arxiv.org/abs/2112.09924","papers","20210101Z00:00:00","Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Information Retrieval (cs.IR), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences","In order to address increasing demands of real-world applications, the research for knowledge-intensive NLP (KI-NLP) should advance by capturing the challenges of a truly open-domain environment: web-scale knowledge, lack of structure, inconsistent quality and noise. To this end, we propose a new setup for evaluating existing knowledge intensive tasks in which we generalize the background corpus to a universal web snapshot. We investigate a slate of NLP tasks which rely on knowledge - either factual or common sense, and ask systems to use a subset of CCNet - the Sphere corpus - as a knowledge source. In contrast to Wikipedia, otherwise a common background corpus in KI-NLP, Sphere is orders of magnitude larger and better reflects the full diversity of knowledge on the web. Despite potential gaps in coverage, challenges of scale, lack of structure and lower quality, we find that retrieval from Sphere enables a state of the art system to match and even outperform Wikipedia-based models on several tasks. We also observe that while a dense index can outperform a sparse BM25 baseline on Wikipedia, on Sphere this is not yet possible. 
To facilitate further research and minimise the community's reliance on proprietary, black-box search engines, we share our indices, evaluation metrics and infrastructure.","Facebook AI Research; University College London, United Kingdom; University of Mannheim, Germany; ENS, PSL University, France; Inria, France; University of Washington, United States","nlp/question-answering, nlp/knowledge-intensive-tasks, ai/knowledge-base","[…] CCNet processes Common Crawl by performing deduplication, language identification and quality filtering (articles are split into three quality tiers: head, […] We pick the CCNet snapshot corresponding to the August 2019 Common Crawl […]","","","CCNet","" "Metod Jazbec, Barna Pàsztor, Felix Faltings, Nino Antulov-Fantulin, Petter N. Kolm – ETH Zurich, Switzerland; New York University, New York, USA","On the Impact of Publicly Available News and Information Transfer to Financial Markets","https://royalsocietypublishing.org/doi/10.1098/rsos.202321","papers","20210101Z00:00:00","","We quantify the propagation and absorption of large-scale publicly available news articles from the World Wide Web to financial markets. To extract publicly available information, we use the news archives from the Common Crawl, a non-profit organization that crawls a large part of the web. We develop a processing pipeline to identify news articles associated with the constituent companies in the S&P 500 index, an equity market index that measures the stock performance of US companies. Using machine learning techniques, we extract sentiment scores from the Common Crawl News data and employ tools from information theory to quantify the information transfer from public news articles to the US stock market. Furthermore, we analyse and quantify the economic significance of the news-based information with a simple sentiment-based portfolio trading strategy. Our findings provide support for that information in publicly available news on the World Wide Web has a statistically and economically significant impact on events in financial markets.","ETH Zurich, Switzerland; New York University, New York, USA","statistical-finance, ai/machine-learning, nlp/sentiment-analysis, financial-markets","","","","","" "Daniel Varab, Natalie Schluter – IT University of Copenhagen, Denmark","MassiveSumm: a very large-scale, very multilingual, news summarisation dataset","https://aclanthology.org/2021.emnlp-main.797","papers","20210101Z00:00:00","","Current research in automatic summarisation is unapologetically anglo-centered—a persistent state-of-affairs, which also predates neural net approaches. High-quality automatic summarisation datasets are notoriously expensive to create, posing a challenge for any language. However, with digitalisation, archiving, and social media advertising of newswire articles, recent work has shown how, with careful methodology application, large-scale datasets can now be simply gathered instead of written. In this paper, we present a large-scale multilingual summarisation dataset containing articles in 92 languages, spread across 28.8 million articles, in more than 35 writing scripts. This is both the largest, most inclusive, existing automatic summarisation dataset, as well as one of the largest, most inclusive, ever published datasets for any NLP task. We present the first investigation on the efficacy of resource building from news platforms in the low-resource language setting.
Finally, we provide some first insight on how low-resource language settings impact state-of-the-art automatic summarisation system performance.","IT University of Copenhagen, Denmark","nlp/text-summarization, nlp/corpus-construction","Comparing with web-scrape multilingual datasets. We compared the intersection of our dataset with two large-scale web datasets widely used by the NLP community: Wikipedia⁴ [⁴https://en.wikipedia.org/wiki/List_of_Wikipedias#Edition_details as of May 10 2021] and Common Crawl⁵ [⁵April 2021 crawl CC-MAIN-2021-04 https://commoncrawl.github.io/cc-crawl-statistics/plots/languages.csv]. An overview of this comparison can be found in Table 4. The manual care that we took in curating the list of platforms from which we wanted to collect data resulted in more data from an improved diversity of languages. For 52 of our languages, MS-All either matches or surpasses the number of Wikipedia pages for the language in question, showing the importance of the full dataset simply as raw data. In fact, the majority of MassiveSumm languages from South Saharan Africa (14/18) have more documents in MS-All than in Wikipedia. And well over half of the MassiveSumm languages for Eurasia (38/63) have more documents in MS-All than in Wikipedia. Turning to Common Crawl, almost half of the languages from South Saharan Africa (8/18) have more pages in MS-All than in Common Crawl. Six out of 63 Eurasian languages have more articles in MS-All than in Common Crawl. When we consider even just the heavily filtered automatic summarisation portion of the data, MS, we find that 10 of the South Saharan African languages contain more pages than Wikipedia, and 5 out of 18 of these languages contain more data than Common Crawl. For Eurasia, 19 of the 63 languages contain more pages than Wikipedia. Table 5 gives the proportions of the articles in MS-All that are also contained in Common Crawl, for those languages where more than 49% can be obtained. This is 18 languages–around a fifth of the languages represented by MassiveSumm. Hence observe that large portions of easily indexible and crawlable, publicly available, diverse linguistic data are not being scraped into one of the most important datasets for NLP, both in size, but in determining to a large extent which languages get mainstream NLP research: Common Crawl. 5 Reflections on Low-Resource Language Automatic Summarisation The central datasets for automatic summarisation have consistently been for English.
In this section we consider how this focus on English has resulted in limited dataset curation methodology development (Section 5.1) and limited automatic summarisation system design (Section 5.2).","","","","" "Sebastian Nagel – Common Crawl","From web graphs to prioritizing web crawls","https://doi.org/10.5281/zenodo.6044920","papers","20210101Z00:00:00","","","Common Crawl","web crawling, web-science/hyperlinkgraph","","","","","" "Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, Shruti Gupta, Subhash Chandra Bose Gali, Vish Subramanian, Partha Talukdar – Google; Indian Institute of Technology, Patna, India; Indian Institute of Technology, Bombay, India; Delhi Technological University, India","MuRIL: Multilingual Representations for Indian Languages","https://arxiv.org/abs/2103.10730","papers","20210101Z00:00:00","","","Google; Indian Institute of Technology, Patna, India; Indian Institute of Technology, Bombay, India; Delhi Technological University, India","nlp/language-model, nlp/corpus-construction","Monolingual Data: We collect monolingual data for the 17 languages mentioned above from the Common Crawl OSCAR corpus¹ and Wikipedia².","","","OSCAR","" "Michael Völske, Janek Bevendorff, Johannes Kiesel, Benno Stein, Maik Fröbe, Matthias Hagen, Martin Potthast – Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany","Web Archive Analytics","https://dl.gi.de/handle/20.500.12116/34759","papers","20210101Z00:00:00","","Web archive analytics is the exploitation of publicly accessible web pages and their evolution for research purposes—to the extent organizationally possible for researchers. In order to better understand the complexity of this task, the first part of this paper puts the entirety of the world's captured, created, and replicated data (the “Global Datasphere”) in relation to other important data sets such as the public internet and its web pages, or what is preserved thereof by the Internet Archive. Recently, the Webis research group, a network of university chairs to which the authors belong, concluded an agreement with the Internet Archive to download a substantial part of its web archive for research purposes. The second part of the paper in hand describes our infrastructure for processing this data treasure: We will eventually host around 8 PB of web archive data from the Internet Archive and Common Crawl, with the goal of supplementing existing large scale web corpora and forming a non-biased subset of the 30 PB web archive at the Internet Archive.","Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany","web-archiving, big data, data processing","In the Webis research group, we aim to store up to 8 PB of web archive data on our own premises, much of it originating from the Internet Archive, but also from other sources, such as the Common Crawl. [...] 
As of October 2020, almost 2.3 PB of data—of which 560 TB stem from the Internet Archive and the rest from the Common Crawl—have been downloaded and are stored on our infrastructure.","","","","" "Vésteinn Snæbjarnarson, Haukur Barri Símonarson, Pétur Orri Ragnarsson, Svanhvít Lilja Ingólfsdóttir, Haukur Páll Jónsson, Vilhjálmur Þorsteinsson, Hafsteinn Einarsson – Miðeind ehf., Iceland; University of Iceland, Iceland","A Warm Start and a Clean Crawled Corpus -- A Recipe for Good Language Models","https://arxiv.org/abs/2201.05601","papers","20220101Z00:00:00","","We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain (TLD). Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we translate and adapt the WinoGrande dataset for co-reference resolution. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.","Miðeind ehf., Iceland; University of Iceland, Iceland","nlp/corpus-construction, nlp/language-model","3.1. The Icelandic Common Crawl Corpus¶ The Common Crawl Foundation is a non-profit organization that scrapes large semi-random subsets of the internet regularly and hosts timestamped and compressed dumps of the web online¹⁰ [¹⁰https://commoncrawl.org/the-data/get-started/]. Each dump contains billions of web pages occupying hundreds of terabytes. Parsing these files directly requires storage and computing power not directly available to most and can come at a significant financial cost. The foundation also hosts indices of URIs and their locations within the large zipped dump files. While these indices are also large, their processing is feasible with a few terabytes of storage.¶ 3.1.1. Extracting Icelandic Common Crawl data¶ The Common Crawl indices, which contain URI and byte offsets within the compressed dumps, are used to reduce the search space when looking for Icelandic texts. The Common Crawl Index Server has a public API¹¹ [¹¹https://index.commoncrawl.org/] where URIs can be queried based on attributes such as date, MIME-type and substring. Using the API eliminates the need to fetch the massive index files. To extract Icelandic, the .is pattern is targeted to match the Icelandic top level domain (TLD), resulting in 63.5 million retrieved pages with URIs and byte locations within the compressed Common Crawl dumps. The computational efficiency of our method can be attributed to these steps. Given the predominant use of the .is TLD for Icelandic web content, we assume that other TLDs have a much lower proportion of Icelandic content. 
That said, a nontrivial amount of text in Icelandic is still likely to be found outside the .is domain and could be extracted by, e.g., parsing the whole Common Crawl, albeit at a much higher computational cost.¶ By targeting only the byte-offsets corresponding to the Icelandic TLD we extract candidate websites that have a high proportion of Icelandic content. In total, the compressed content is 687GiB on disk. All dumps since the start of the Common Crawl in 2008 until March 2020 were included.¶ Plain text was extracted from the collected WARC (Web Archive format) files using jusText (Pomikálek, 2011)12 to remove boilerplate content and HTML tags.","CDX, WARC, ARC 2008 – March 2020","","","" "Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de-Viñaspre, Aitor Soroa – Meta AI; HiTZ Center - Ixa, University of the Basque Country UPV/EHU","Does Corpus Quality Really Matter for Low-Resource Languages?","https://arxiv.org/abs/2203.08111","papers","20220101Z00:00:00","Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences","The vast majority of non-English corpora are derived from automatically filtered versions of CommonCrawl. While prior work has identified major issues on the quality of these datasets (Kreutzer et al., 2021), it is not clear how this impacts downstream performance. Taking Basque as a case study, we explore tailored crawling (manually identifying and scraping websites with high-quality content) as an alternative to filtering CommonCrawl. Our new corpus, called EusCrawl, is similar in size to the Basque portion of popular multilingual corpora like CC100 and mC4, yet it has a much higher quality according to native annotators. For instance, 66% of documents are rated as high-quality for EusCrawl, in contrast with <33% for both mC4 and CC100. Nevertheless, we obtain similar results on downstream tasks regardless of the corpus used for pre-training. Our work suggests that NLU performance in low-resource languages is primarily constrained by the quantity rather than the quality of the data, prompting for methods to exploit more diverse data sources.","Meta AI; HiTZ Center - Ixa, University of the Basque Country UPV/EHU","nlp/corpus-construction, nlp/corpus-representativeness, nlp/corpus-quality, nlp/language-models, nlp/low-resource-languages","In this paper, we explore tailored crawling (i.e., manually identifying and scraping websites with high-quality content) as an alternative to filtering CommonCrawl. Taking Basque as a case study, we collect 12.5M documents from 33 websites with Creative Commons content. The resulting corpus, called EusCrawl, is similar in size to the Basque portion of CC100 and mC4, but it has substantially less issues and a higher perceived quality according to our blind audit with native annotators. However, we find that this improvement does not carry over to downstream tasks, as masked language models pre-trained on either corpora obtain similar results on 5 NLU benchmarks. 
Our results suggest that data quantity and domain play a more important role, prompting for methods to exploit more diverse sources of data in low-resource languages.","","","","" "Stella Biderman, Kieran Bicheno, Leo Gao – EleutherAI","Datasheet for the Pile","https://arxiv.org/abs/2201.07311","papers","20220101Z00:00:00","","This datasheet describes the Pile, a 825 GiB dataset of human-authored text compiled by EleutherAI for use in large-scale language modeling. The Pile is comprised of 22 different text sources, ranging from original scrapes done for this project, to text data made available by the data owners, to third-party scrapes available online.","EleutherAI","nlp/corpus-construction, nlp/corpus-datasheet, nlp/corpus-representativeness","Pile-CC: The Pile-CC dataset is a sample from the Common Crawl WARCs that has been converted to text using jusText [Endrédy and Novák, 2013].¶ [...] Pile-CC: The Pile-CC dataset was created to be included in the Pile. The underlying data comes from the Common Crawl, which was created to give people access to the wealth of information contained in the internet. Its creators were concerned that only data mining companies would be able to collect this data, and has the explicit aim of democratizing technology.¶ [...] Pile-CC: The data is sourced from Common Crawl, a non-profit 501(c)(3) organization founded by Gil Elbaz. The data from Common Crawl was processed by EleutherAI into Pile-CC.¶ [...] Pile-CC: Instances are webpages.¶ [...] Pile-CC: 54,953,117 documents, totaling 227.12 GiB.¶ [...] Pile-CC: A tiny fraction of the entire Common Crawl was included, chosen arbitrarily and heavily filtered as detailed in Gao et al. [2020].¶ [...] Pile-CC: Data in the Pile-CC dataset were scraped from websites by the Common Crawl and then downloaded directly from the Common Crawl by EleutherAI.¶ [...] Pile-CC: The earliest date of contents in Pile-CC is unknown.¶","","The-Pile-English","","" "Jonas Andersson Schwarz – Göteborgs Universitet, Sweden","The hitchhiker's guide Method handbook for quantification of online linguistic data in a country-specific context. Official research report, Linguistic Explorations of Societies (Work Package 1)","https://gupea.ub.gu.se/bitstream/handle/2077/70890/2022_1_Andersson%20Schwarz.pdf","papers","20220101Z00:00:00","","","Göteborgs Universitet, Sweden","nlp/corpus-construction, nlp/corpus-representativeness","Central actors (in no particular order)¶ CommonCrawl. California-based non-profit organization that makes monthly crawls of the openly available Web and provides datasets and metadata to the public freely. The CommonCrawl corpus contains petabytes of data including raw web page data, metadata data and text data collected since 2011. Since 2012, CommonCrawl’s archive is hosted by Amazon Web Services as part of its Public Data Sets program. Every crawl contains around 300 terabytes of data and roughly 3 billion pages. In 2020, a filtered version of this CommonCrawl archive was used to train OpenAI’s GPT-3 language model.¶ [...] Similarly, CommonCrawl (2021) provides an aggregate listing the percentages of their database covered by each language – measured as the primary language of each html document, as identified by the Compact Language Detector 2 (CLD2) algorithm. This was included as a good benchmark to compare with.¶ [...]
In comparison, when plotting the currently stated language distribution of CommonCrawl (2021) in relation to the same population numbers of L1 and L2 speakers, the CommonCrawl distribution displays a similarly low kurtosis and skewness.","","","","" "Makoto Morishita, Katsuki Chousa, Jun Suzuki, Masaaki Nagata – NTT Communication Science Laboratories, NTT Corporation, Japan","JParaCrawl v3.0: A Large-scale English-Japanese Parallel Corpus","https://arxiv.org/abs/2202.12607","papers","20220101Z00:00:00","","Most current machine translation models are mainly trained with parallel corpora, and their translation accuracy largely depends on the quality and quantity of the corpora. Although there are billions of parallel sentences for a few language pairs, effectively dealing with most language pairs is difficult due to a lack of publicly available parallel corpora. This paper creates a large parallel corpus for English-Japanese, a language pair for which only limited resources are available, compared to such resource-rich languages as English-German. It introduces a new web-based English-Japanese parallel corpus named JParaCrawl v3.0. Our new corpus contains more than 21 million unique parallel sentence pairs, which is more than twice as many as the previous JParaCrawl v2.0 corpus. Through experiments, we empirically show how our new corpus boosts the accuracy of machine translation models on various domains. The JParaCrawl v3.0 corpus will eventually be publicly available online for research purposes.","NTT Communication Science Laboratories, NTT Corporation, Japan","nlp/machine-translation, nlp/parallel-corpus, nlp/corpus-construction","Our method extracts parallel sentences from the web. Thus, the first step is finding a website that has parallel sentences. This method is based on the hypothesis that websites containing the same English and Japanese sentences might have parallel texts. To list such parallel websites, we analyzed all the Common Crawl text archive data released from March 2019 to August 2021³. [³During this period, the Common Crawl project released 25 archives, and their text size was about 212 TB.] We identified the language in the archive by CLD2⁴ [⁴ https://github.com/CLD2Owners/cld2] and listed 100,000 large websites that roughly have the same size of English and Japanese texts. For this step, we used extractor⁵ [⁵ https://github.com/paracrawl/extractor] that was provided by the ParaCrawl project.","","","","" "Imad LAKIM, Ebtesam Almazrouei, Ibrahim Abu Alhaol, Merouane Debbah, Julien Launay – TII, Abu Dhabi, Arabic Emirates; LightOn, Paris, France","A Holistic Assessment of the Carbon Footprint of Noor, a Very Large Arabic Language Model","https://openreview.net/forum?id=B-lS3zH8Zq","papers","20220101Z00:00:00","","As ever larger language models grow more ubiquitous, it is crucial to consider their environmental impact. Characterised by extreme size and resource use, recent generations of models have been criticised for their voracious appetite for compute, and thus significant carbon footprint. Although reporting of carbon impact has grown more common in machine learning papers, this reporting is usually limited to compute resources used strictly for training. In this work, we propose a holistic assessment of the footprint of an extreme-scale language model, Noor.
Noor is an ongoing project aiming to develop the largest multi-task Arabic language models--with up to 13B parameters--leveraging zero-shot generalisation to enable a wide range of downstream tasks via natural language instructions. We assess the total carbon bill of the entire project: starting with data collection and storage costs, including research and development budgets, pretraining costs, future serving estimates, and other exogenous costs necessary for this international cooperation. Notably, we find that inference costs and exogenous factors can have a significant impact on total budget. Finally, we discuss pathways to reduce the carbon footprint of extreme-scale models.","TII, Abu Dhabi, Arabic Emirates; LightOn, Paris, France","nlp/language-model, nlp/transformer-language-model, carbon-footprint","We use Common Crawl (CC) for acquiring large amounts of web data. Each CC dump is on average around 10TB, and we discard it immediately after processing it. On average, it takes 24 hours to fully process a dump: we used 21 dumps from CC, meaning we stored 210TB of data for 24hours, equivalent to 57 kWh of energy consumption. After processing the dumps, we got on average 1.2TB of data per dump, thus 25TB in total. Considering that this data will be stored for 6 months, we end up with 1.3 MWh of energy consumption for the bulk data. Note that we keep the processed data in all languages (not just Modern Standard Arabic).","","","","" "Asier Gutiérrez-Fandiño, David Pérez-Fernández, Jordi Armengol-Estapé, David Griol, Zoraida Callejas – LHF Labs; Universidad Autónoma de Madrid, Spain; University of Edinburgh, United Kingdom; Universidad de Granada, Spain","esCorpius: A Massive Spanish Crawling Corpus","https://ui.adsabs.harvard.edu/abs/2022arXiv220615147G","papers","20220101Z00:00:00","Computer Science - Computation and Language, Computer Science - Artificial Intelligence","","LHF Labs; Universidad Autónoma de Madrid, Spain; University of Edinburgh, United Kingdom; Universidad de Granada, Spain","nlp/corpus-construction, nlp/text-corpora","[…] In this paper, we introduce esCorpius, a Spanish crawling corpus obtained from near 1 Pb of Common Crawl data. It is the most extensive corpus in Spanish with this level of quality in the extraction, purification and deduplication of web textual content […] A total of 39,502 compressed WARC (Web Archive) from Common Crawl files were processed (see section 3.3 for more details). The compressed information occupied about 180 TB and the size of the processed decompressed information is estimated to be more than 0.8 PB. Prior to content deduplication, the downloaded corpus was composed of 106.768.594.753 words, 3.129.248.875 lines and 163.518.405 web pages. The deduplicated and cleaned corpus size is 346.262.072.705 bytes (322.5 GB), with 104.073.706 total number of lines, 50.040.055.322 tokens, 1.125.798.968 paragraphs and 2.421.598.201 sentences.","","","","" "Arnold Overwijk, Chenyan Xiong, Jamie Callan – Microsoft; Carnegie Mellon University","ClueWeb22: 10 Billion Web Documents with Rich Information","https://doi.org/10.1145/3477495.3536321","papers","20220101Z00:00:00","clueweb, web corpus, dataset","ClueWeb22, the newest iteration of the ClueWeb line of datasets, is the result of more than a year of collaboration between industry and academia. Its design is influenced by the research needs of the academic community and the real-world needs of large-scale industry systems. 
Compared with earlier ClueWeb datasets, the ClueWeb22 corpus is larger, more varied, and has higher-quality documents. Its core is raw HTML, but it includes clean text versions of documents to lower the barrier to entry. Several aspects of ClueWeb22 are available to the research community for the first time at this scale, for example, visual representations of rendered web pages, parsed structured information from the HTML document, and the alignment of document distributions (domains, languages, and topics) to commercial web search.This talk shares the design and construction of ClueWeb22, and discusses its new features. We believe this newer, larger, and richer ClueWeb corpus will enable and support a broad range of research in IR, NLP, and deep learning.","Microsoft; Carnegie Mellon University","cc-cited-not-used, nlp/corpus-construction, nlp/text-corpora, information-retrieval","One approach is to sift CommonCrawl data, eg, the C4 dataset used to pretrain T5 [10], which provides sufficient quantity, but the quality quickly becomes a concern. For example, the cleaned CommonCrawl reflects a quite weird distribution of the web [5]. Language models pretrained on C4 often perform worse than models pretrained on higher quality corpora at the same scale. With ClueWeb22, we aim to provide the web corpus for research in the near future. The design of ClueWeb22 emphasizes on these goals: 1) to reflect the distribution of the web in real scenarios; 2) to provide web pages at large quantity and also with high quality; 3) to enable new research directions by including information important in industry but previously not publicly available.","","","","" "Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer – Meta AI","OPT: Open Pre-trained Transformer Language Models","https://arxiv.org/abs/2205.01068","papers","20220101Z00:00:00","","","Meta AI","nlp/language-model, nlp/transformer-language-model, nlp/corpus-construction","","","","CC-Stories, Pile-CC, CC-NEWS-RoBERTa-v2","" "Sylvain Lugeon, Tiziano Piccardi, Robert West – EPFL, Switzerland","Homepage2Vec: Language-Agnostic Website Embedding and Classification","https://ojs.aaai.org/index.php/ICWSM/article/download/19380/19152","papers","20220101Z00:00:00","","Top-level domain. Some top-level domains (TLD) such as .edu or .biz can offer a good hint about the website's content. For example, a typical use case for .edu is university websites, whereas .biz is commonly associated with business activities. Following this intuition, we collected from Common Crawl,5 a large-scale sample of the Web, the 19 most frequent TLDs: .com, .org, .net, .info, .xyz, .club, .biz, .top, .edu, .online, .pro, .site, .vip, .icu, .buzz, .app, .asia, .gov, .space, excluding the country code TLD (ccTLD) because they indicate geographic origin, not website content. 
We represent this feature with a one-hot encoding vector of 19 dimensions.","EPFL, Switzerland","nlp/text-classification, web-site-classification","","","","","" "Johannes Zirngibl, Steffen Deusch, Patrick Sattler, Juliane Aulbach, Georg Carle, Mattijs Jonker – Technical University of Munich, Germany; University of Twente, The Netherlands","Domain Parking: Largely Present, Rarely Considered!","https://mediatum.ub.tum.de/1661842","papers","20220101Z00:00:00","","Domain parking typically involves leveraging advertisements to generate revenue on otherwise inactive domain names. Their content is rarely of real value to users and tends to be highly similar across parked domains. They have commonalities beyond content alone: parked domains can share hosting and DNS infrastructure. Parking rarely receives special treatment in existing studies (e.g., content analyses or infrastructure concentration studies). While the presence and possible bias introduced by parked pages is sometimes acknowledged in studies, the studies still treat parked domains as any other, either because differentiation is infeasible, or because doing so is considered out-of-scope. We argue that the impact of parked domains on analyses regarding the current state and future development of the Internet should not be overlooked. In this paper, we motivate this argument through quantification, and take steps towards helping other researchers identify parked domains. We systematically collect a list of 82 parking services and develop DNS-based indicators to help identify parked domains. We next quantify the presence of parked domains, using large-scale DNS data containing hundreds of millions of registered domain names, representative for a significant part of the global DNS namespace. Overall, we pinpoint 60 M parked domains, which is a significant percentage of all names under consideration (23 %) and identify up to 4 % of domains from top lists to be parked. These findings demonstrate that the effect of parked pages is potentially pronounced. We also break down into the various parking services and DNS zones. This helps us demonstrate and further discuss the effect that domain parking can have on research and Internet consolidation.","Technical University of Munich, Germany; University of Twente, The Netherlands","web-science, internet/DNS, internet/domain-parking","Common Crawl While visual identification allowed us to validate the inferences to a reasonable extent, we wanted to upscale validation. Therefore, we consider Common Crawl (CC) data [21] [C. Crawl. (2022) The Common Crawl Corpus. [Online]. Available: https://commoncrawl.org/] and calculate the similarity of pages. Common Crawl is an open repository of web crawl data, collected at monthly intervals, accounting for hundreds of millions of unique domain names, and many more URLs. We consider CC data for Jan 2022 and the ∼60 M parked domains that we identify on Jan 28th, 2022. We extract the HTML content of parked pages from CC data, only considering URLs that contain exactly the registered domain. Furthermore, we require the crawl target to have been the landing page (i.e., the path of the URL is /) and also to have resulted in a useful response (i.e., HTTP status code of 200). Given these filters, ∼1.29 M HTML rich responses can be obtained. 
We extract visible text and tokenize it into words, remove stop words, apply lemmatization, and create a vector for the most-frequently used words for each page.","","","","" "Alexandra Sasha Luccioni, Frances Corry, Hamsini Sridharan, Mike Ananny, Jason Schultz, Kate Crawford – Hugging Face; University of Southern California, USA; New York University, USA; Microsoft Research, USA","A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication","https://doi.org/10.1145/3531146.3533086","papers","20220101Z00:00:00","datasets, data stewardship data management dataset deprecation","Datasets are central to training machine learning (ML) models. The ML community has recently made significant improvements to data stewardship and documentation practices across the model development life cycle. However, the act of deprecating, or deleting, datasets has been largely overlooked, and there are currently no standardized approaches for structuring this stage of the dataset life cycle. In this paper, we study the practice of dataset deprecation in ML, identify several cases of datasets that continued to circulate despite having been deprecated, and describe the different technical, legal, ethical, and organizational issues raised by such continuations. We then propose a Dataset Deprecation Framework that includes considerations of risk, mitigation of impact, appeal mechanisms, timeline, post-deprecation protocols, and publication checks that can be adapted and implemented by the ML community. Finally, we propose creating a centralized, sustainable repository system for archiving datasets, tracking dataset modifications or deprecations, and facilitating practices of care and stewardship that can be integrated into research and publication processes.","Hugging Face; University of Southern California, USA; New York University, USA; Microsoft Research, USA","ai/ethics-of-machine-learning, nlp/text-corpora, nlp/corpus-construction, cc-cited-not-used","When it comes to filtering large text datasets scraped from the Web, given their sheer size (C4 represents 2.3 TB of data, whereas the Common Crawl has 139TB), filtering them is complex and time-consuming, although approaches have been proposed for reducing duplicates and train-test overlap [53]. [...] In practice, documenting and deprecating these datasets is akin to a game of whack-a-mole, since new versions of the Common Crawl come out every few months. Analyzing what they contain and their degrees of contamination through common evaluation tasks would take significant effort.","","","","" "Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. 
Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, Mofetoluwa Adeyemi – Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja","Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets","https://doi.org/10.1162/tacl_a_00447","papers","20220101Z00:00:00","","With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50% sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.","Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja","nlp/corpus-construction, nlp/web-as-corpus, nlp/parallel-corpus, nlp/low-resource-language","We selected the corpora for their multilinguality and the inclusion of understudied languages in NLP. With the exception of WikiMatrix and Paracrawl, all corpora are derived from CommonCrawl, and distinguish themselves by the choice of filtering methods, LangID and automatic alignment technology.","","CCAligned-2020, Tensorflow-C4-Multilingual, OSCAR","","" "Julien Abadji, Pedro Ortiz Suarez, Laurent Romary, Benoît Sagot – Inria, France; Sorbonne Université, France","Towards a Cleaner Document-Oriented Multilingual Crawled Corpus","https://arxiv.org/abs/2201.06642","papers","20220101Z00:00:00","","The need for large raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing. 
And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.","Inria, France; Sorbonne Université, France","nlp/corpus-construction, nlp/web-as-corpus","","","OSCAR","","" "Wang Tongjing, Zhao Yin, Ziyu Bao, Evert Meijers – Utrecht University, The Netherlands; Delft University of Technology, The Netherlands","Dataset of intercity relationships between 293 Chinese cities extracted and classified on the basis of toponym co-occurrences on Common Crawl","https://www.researchgate.net/profile/Evert-Meijers/publication/362952059_Dataset_of_intercity_relationships_between_293_Chinese_cities_extracted_and_classified_on_the_basis_of_toponym_co-occurrences_on_Common_Crawl/links/6308bfc25eed5e4bd11f7938/Dataset-of-intercity-relationships-between-293-Chinese-cities-extracted-and-classified-on-the-basis-of-toponym-co-occurrences-on-Common-Crawl.pdf","papers","20220101Z00:00:00","city networks, toponym co-occurrence, city relationship, geographical information retrieval","Although the importance of intercity relationships is theoretically acknowledged for cities’ socioeconomic development, the availability of such relational data often limits relevant urban studies. One of the new approaches of collecting city relational data is to extract the co-appearance of their place names from web texts. However, dealing with a gigantic web corpus is difficult for domain researchers given the complexities of processing terabytes of raw data. This paper develops an efficient and easy-to-follow method to extract a dataset of intercity relationships between 293 large Chinese cities applying the toponym co-occurrence method to a web archive. Our method successfully filters a 6.98 TB CC data set into a 202 GB single language text corpus. A highly-scalable Hadoop- based framework processes the full CC corpus utilizing a 1080 CPU cluster on the Amazon Elastic Map/Reduce infrastructure. To reveal more details of the intercity relationships, the intercity relationships are further classified into six categories: industry, information technology (IT), finance, research, culture, and government.","Utrecht University, The Netherlands; Delft University of Technology, The Netherlands","information retrieval, toponymy, dataset-creation","The data was retrieved from a Common Crawl raw corpus through a series of data processing. The web pages in this corpus that do not contain Chinese characteristics or Chinese placenames were filtered out based on keyword selection. The filtered Chinese corpus was 202 GB and the filtered Chinese corpus with placenames was about 139.5GB. Then we count the number of web pages where two city names co-appear. 
These intercity relationships were further classified into six categories using a lexicon-based classification method.","CC-MAIN-2019-18 (WET)","","","" "Per E Kummervold, Freddy Wetjen, Javier de la Rosa – National Library of Norway (NLN), Norway","The Norwegian Colossal Corpus: A Text Corpus for Training Large Norwegian Language Models","http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.410.pdf","papers","20220101Z00:00:00","","","National Library of Norway (NLN), Norway","nlp/corpus-construction","Common Crawl (2022) is a non-profit organization that has been collecting data from the web and providing these archives to the public since 2011. Common Crawl-based datasets are popular for training transformer models and are the basis for the enormous 800GB Pile dataset (Gao, 2020), among others. There are extracted Norwegian datasets that are also based on Common Crawl. The Open Super-large Crawled Aggregated coRpus (OSCAR) (Suárez et al., 2019) contains 4.7GB (800M words) of Norwegian Bokmål and 54MB (9M words) of Norwegian Nynorsk. Using a cleaned version of Common Crawl, Google compiled a multilingual version of their English colossal corpus, called MC4 (2022), for training their mT5 model (Xue et al., 2020). The Norwegian part of that dataset is roughly 94GB (14B words). Both OSCAR and the MC4 datasets have been made available on Hugging Face (2022). Unfortunately, their respective licenses do not allow for redistribution within the NCC. To overcome this limitation, we are releasing scripts for the preparation, cleaning, deduplication, and formatting of these datasets, so they can be interleaved with the NCC. By combining NCC with OSCAR and MC4, it should be possible to create a deduplicated Norwegian corpus with over 100GB of text (15B words).","","","OSCAR","" "Hanlin Li, Nicholas Vincent – Northwestern University, USA","Rethinking Data Governance: A Labor-Oriented Approach","https://criticalautomation.org/wp-content/uploads/2022/03/li-vincent-data-governance.pdf","papers","20220101Z00:00:00","","The current data governance paradigm in which technology companies solely decide how user data is collected and used has introduced many issues to the tech sector. Prominent examples include information asymmetry about user data’s value, monopolistic practices enabled by data’s network effects, and power imbalance with respect to data aggregation and analysis. This work explicates how viewing users’ data-generating activities through a labor lens can help to mitigate these issues and provides corresponding design and research directions.","Northwestern University, USA","dataset-creation, data governance, user-generated content, artificial intelligence, machine learning, cc-cited-not-used","2.1 Information asymmetry about user data's value¶ The lack of transparency about user data's value helps make it possible for operators of for-profit computing systems to monetize user data and reap the bulk of its financial benefits. Currently, there exists a substantial gap between what data-driven technology companies know about user data's value and what users themselves do. For example, while social media platforms are well aware of the amount of financial benefits of user engagement, users do not have a window into how their collective attention and knowledge powers such businesses. 
This information asymmetry is further exacerbated by the fact that the vast majority of data that users produce during their interaction with modern technologies is rarely visible to themselves and is used downstream without their awareness and consent. For instance, the rise of AI technologies is possible largely due to the abundance of data unwittingly generated by the public for purposes other than enabling AI models. Prominent examples include Flickr photos [12], Wikipedia articles [14], and the Common Crawl dataset consisting of publicly available webpages [11]. In many of such cases, users produce data without being aware of its value and potential, giving technology companies the opportunity to extract an enormous amount of revenue from such data.","","","","" "Jiameng Pu, Zain Sarwar, Sifat Muhammad Abdullah, Abdullah Rehman, Yoonjin Kim, Parantapa Bhattacharya, Mobin Javed, Bimal Viswanath, Virginia Tech, LUMS Pakistan – Virginia Tech, USA; University of Chicago, USA; LUMS, Pakistan, University of Virginia, USA","Deepfake Text Detection: Limitations and Opportunities","https://jmpu.github.io/files/Deepfake%20Text%20Detection%20Limitations%20and%20Opportunities_CR.pdf","papers","20220101Z00:00:00","","","Virginia Tech, USA; University of Chicago, USA; LUMS, Pakistan, University of Virginia, USA","nlp/text-classification, deep-fake-detection, misinformation, disinformation","","","","Grover-RealNews","" "Florian Hantke, Ben Stock – CISPA Helmholtz Center for Information Security, Germany","HTML Violations and Where to Find Them: A Longitudinal Analysis of Specification Violations in HTML","https://swag.cispa.saarland/papers/hantke2022violations.pdf","papers","20220101Z00:00:00","","","CISPA Helmholtz Center for Information Security, Germany","web-science, internet-security","[...] we leveraged Common Crawl [22] to analyze more than 23K popular domains over the course of eight years. [...] the crawler framework first collects meta information for each of the listed domains using Common Crawl [22] as a basis for the following analyses (1). This Common Crawl approach makes it possible to take a look into the past and analyze old versions of websites as well as current snapshots. Unlike similar crawling studies before using the Internet Archive[32], with Common Crawl, we are not limited by rate limit issues as we can request the database and S3 bucket directly. This makes the process fast and enables to analyze nearly a thousand pages per minute from one IP address over multiple days. The meta information that the framework collects contains details on where an HTML document can be found in the Common Crawl’s dumps. 
For each domain, the framework collects meta information from up to 100 pages and hands them to the crawler.","","","","" "Todor Markov, Chong Zhang, Sandhini Agarwal, Tyna Eloundou, Teddy Lee, Steven Adler, Angela Jiang, Lilian Weng – OpenAI","A Holistic Approach to Undesired Content Detection in the Real World","https://arxiv.org/abs/2208.03274","papers","20220101Z00:00:00","","","OpenAI","nlp/text-classification, nlp/corpus-construction, toxic content, hate speech","","","","","" "Joshua Reynolds, Adam Bates, Michael Bailey – New Mexico State University, USA; University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA","Equivocal URLs: Understanding the Fragmented Space of URL Parser Implementations","https://link.springer.com/chapter/10.1007/978-3-031-17143-7_9","papers","20220101Z00:00:00","","","New Mexico State University, USA; University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA","computer-security/internet-security, web-security, URL parsing","We also surveyed ∼350 million URLs sampled uniformly and randomly from the approximately 3 billion URLs in Common Crawl's January 2022 URL Index [35]. [35 Kreymer, I., Chuang, G.: Announcing the common crawl index! (2015)]","","","","" "Mehmet Korkmaz, Emre Koçyiğit, Özgür Şahingöz, Banu Diri – Yildiz Technical University, Istanbul, Turkey; Biruni University, Istanbul, Turkey","A Hybrid Phishing Detection System Using Deep Learning-based URL and Content Analysis","https://www.eejournal.ktu.lt/index.php/elt/article/download/31197/15556","papers","20220101Z00:00:00","","","Yildiz Technical University, Istanbul, Turkey; Biruni University, Istanbul, Turkey","computer-security/internet-security","","","","","" "Mohd Faizal Ab Razak, Mohd Izham Jaya, Ferda Ernawan, Ahmad Firdaus, Fajar Agung Nugroho – Universitas Dian Nuswantoro, Semarang, Indonesia","Comparative Analysis of Machine Learning Classifiers for Phishing Detection","https://ieeexplore.ieee.org/abstract/document/9930531/","papers","20220101Z00:00:00","","","Universitas Dian Nuswantoro, Semarang, Indonesia","computer-security/internet-security","… The source for this dataset is from the University Malaysia of Sarawak, compiled from PhishTank, OpenPhish, Alexa and Common Crawl. One method for detecting new phishing websites is to utilize heuristics such as the URL and CSS detection …","","","","" "L. Ranaldi, A. Nourbakhsh, F. Fallucchid, FM. Zanzotto – Guglielmo Marconi University, Roma, Italy; University of Rome Tor Vergata, Roma, Italy","C-OSINT: COVID-19 Open Source artificial INTelligence framework","https://ceur-ws.org/Vol-3260/paper16.pdf","papers","20220101Z00:00:00","","With the emergence of COVID-19 disease worldwide, a market of the products related to this disease formed across the Internet. By the time these goods were in short supply, many uncontrolled Dark Web Marketplaces (DWM) were active in selling these products. At the same time, Dark Web Forums (DWF) became proxies for spreading false ideas, fake news about COVID-19, and advertising products sold in DWMs. This study investigates the activities entertained in the DWMs and DWFs to propose a learning-based model to distinguish them from their related counterparts on the surface web. To this end, we propose a COVID-19 Open Source artificial INTelligence framework (C-OSINT) to automatically collect and classify the activities done in DWMs and DWFs. 
Moreover, we incorporate linguistic and stylistic solutions to leverage the classification performance between the content found in DWMs and DWFs and two surface web sources. Our results show that using syntactic and stylistic representation outperforms the Transformer based results over these domains.","Guglielmo Marconi University, Roma, Italy; University of Rome Tor Vergata, Roma, Italy","nlp/transformer-language-model; web-science/dark-web","","","","","" "Shuheng Liu, Alan Ritter – Georgia Institute of Technology","Do CoNLL-2003 Named Entity Taggers Still Work Well in 2023?","https://arxiv.org/abs/2212.09747","papers","20220101Z00:00:00","","","Georgia Institute of Technology","nlp/named-entity-recognition, dataset-creation","Our dataset follows this distribution to collect Reuters news articles published between December 5th and 7th, 2020, collected from the Common Crawl Foundation³. [³http://commoncrawl.org/]","","","","" "Matyáš Boháček, Michal Bravanský, Filip Trhlík, Václav Moravec – Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom","Fine-grained Czech News Article Dataset: An Interdisciplinary Approach to Trustworthiness Analysis","https://arxiv.org/abs/2212.08550","papers","20220101Z00:00:00","","","Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom","nlp/fake-news-detection, dataset-creation","Initially, we assembled a collection of almost 94,000 articles by scraping URLs of 45 Czech news sources obtained from Common Crawl² [²https://commoncrawl.org/]. These sources included mainstream journalistic websites, tabloids, independent news outlets, and websites that are part of the disinformation ecosystem [26], capturing the full scope of journalistic content in the Czech Republic. [...] We applied multiple filters and balancing mechanisms to mitigate deficiencies caused by inherent flaws in Common Crawl, which reduced the dataset’s size from 94,000 to 10,000 items. This way, we also ensured that the data is as representative of the Czech news ecosystem and as diverse as possible.","","","","" "Mehtab Khan, Alex Hanna – Yale Law School, USA; Distributed AI Research Institute","The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability","https://ssrn.com/abstract=4217148","papers","20220101Z00:00:00","","There has been increased attention toward the datasets that are used to train and build AI technologies from the computer science and social science research communities, but less from legal scholarship. Both Large-Scale Language Datasets (LSLDs) and Large-Scale Computer Vision Datasets (LSCVDs) have been at the forefront of such discussions, due to recent controversies involving the use of facial recognition technologies, and the discussion of the use of publicly-available text for the training of massive models which generate human-like text. Many of these datasets serve as “benchmarks” to develop models that are used both in academic and industry research, while others are used solely for training models. The process of developing LSLDs and LSCVDs is complex and contextual, involving dozens of decisions about what kinds of data to collect, label, and train a model on, as well as how to make the data available to other researchers. 
However, little attention has been paid to mapping and consolidating the legal issues that arise at different stages of this process: when the data is being collected, after the data is used to build and evaluate models and applications, and how that data is distributed more widely. In this article, we offer four main contributions. First, we describe what kinds of objects these datasets are, how many different kinds exist, what types of modalities they encompass, and why they are important. Second, we provide more clarity about the stages of dataset development – a process that has thus far been subsumed within broader discussions about bias and discrimination – and the subjects who may be susceptible to harms at each point of development. Third, we provide a matrix of both the stages of dataset development and the subjects of dataset development, which traces the connections between stages and subjects. Fourth, we use this analysis to identify some basic legal issues that arise at the various stages in order to foster a better understanding of the dilemmas and tensions that arise at every stage. We situate our discussion within wider discussion of current debates and proposals related to algorithmic accountability. This paper fulfills an essential gap when it comes to comprehending the complicated landscape of legal issues connected to datasets and the gigantic AI models trained on them.","Yale Law School, USA; Distributed AI Research Institute","nlp/corpus-construction, dataset-creation, data-governance, privacy, legal/copyright","D. Common Crawl: Archiving the Whole Web The Common Crawl (CC) dataset is one of the most popular datasets used in the training of what have typically been called large language models. [...]","","","","" "Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, Jenia Jitsev – LAION; UC Berkeley, USA; Gentec Data; TU Darmstadt, Germany; Hessian.AI; University of Washington, Seattle, USA; Technical University of Munich, Germany; Stability AI; EleutherAI; Juelich Supercomputing Center (JSC), Germany; Research Center Juelich (FZJ), Germany","LAION-5B: An open large-scale dataset for training next generation image-text models","https://arxiv.org/abs/2210.08402","papers","20220101Z00:00:00","","","LAION; UC Berkeley, USA; Gentec Data; TU Darmstadt, Germany; Hessian.AI; University of Washington, Seattle, USA; Technical University of Munich, Germany; Stability AI; EleutherAI; Juelich Supercomputing Center (JSC), Germany; Research Center Juelich (FZJ), Germany","nlp/corpus-construction, nlp/multimodal-corpora","By starting from Common Crawl [1] and filtering this data source with an existing CLIP model, we derive a dataset consisting of three parts: 2.32 billion English image-text examples, 2.26 billion multilingual examples, and 1.27 billion examples that are not specific to a particular language (e.g., places, products, etc.). [...] To extract image-text pairs from Common Crawl, we parse the HTML IMG (image) tags from Common Crawl’s WAT metadata files.⁴ [⁴See https://commoncrawl.org/the-data/get-started/ for details of the metadata format.] Specifically, we focus on images with an alt-text so we can create image-text pair.","","LAION-5B","","" "{NLLB Team}, Marta R. 
Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang – Meta AI; UC Berkeley, USA; Johns Hopkins University, USA","No Language Left Behind: Scaling Human-Centered Machine Translation","https://arxiv.org/abs/2207.04672","papers","20220101Z00:00:00","","Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.","Meta AI; UC Berkeley, USA; Johns Hopkins University, USA","nlp/corpus-construction, nlp/parallel-corpus, nlp/low-resource-language, nlp/language-identification","We begin with web data as our starting point, provided by CommonCrawl (CC)18 and ParaCrawl (Bañón et al., 2020).","","NLLB","","" "Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro – Microsoft; NVIDIA","Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model","https://arxiv.org/abs/2201.11990","papers","20220101Z00:00:00","Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","","Microsoft; NVIDIA","nlp/language-model","Resources such as Common Crawl (CC) provide snapshots of the web which can be utilized as a source of language data. 
While these data sources contain an enormous amount of language data, they also require carefully designed preprocessing steps in order to select data which is of reasonable quality. As prior work has found (e.g., [9]), the quality of unfiltered Common Crawl data is lower than that of curated datasets and steps should be taken to increase the average quality of data selected from Common Crawl for LM pretraining. [...] Common Crawl: As mentioned previously, Common Crawl comprises an immense amount of data. We chose to process two snapshots, 2020-50 and 2021-04, with the aim of acquiring around 150B tokens of training data. The first step of this process is language detection [11] and text extraction from the raw HTML included in the Common Crawl WARC files¹. Following the rationale presented in [11], we used the pycld2² and jusText³ libraries for these tasks. [...] In addition to Common Crawl data, we leveraged a number of other previously generated datasets. From The Pile, we selected Books3, OpenWebText2, Stack Exchange, PubMed Abstracts, Wikipedia, Gutenberg (PG-19), BookCorpus2, NIH ExPorter, and Pile-CC datasets. We also included the CC-Stories and RealNews datasets used to train Megatron [63].","","","","" "Tom Alby, Robert Jäschke – Humboldt-Universität zu Berlin, Berlin, Germany","Analyzing the Web: Are Top Websites Lists a Good Choice for Research?","https://link.springer.com/chapter/10.1007/978-3-031-16802-4_2","papers","20220101Z00:00:00","","The web has been a subject of research since its beginning, but it is difficult if not impossible to analyze the whole web, even if a database of all URLs would be freely accessible. Hundreds of studies have used commercial top websites lists as a shortcut, in particular the Alexa One Million Top Sites list. However, apart from the fact that Amazon decided to terminate Alexa, we question the usefulness of such lists for research as they have several shortcomings. Our analysis shows that top sites lists miss frequently visited websites and offer only little value for language-specific research. We present a heuristic-driven alternative based on the Common Crawl host-level web graph while also taking language-specific requirements into account.","Humboldt-Universität zu Berlin, Berlin, Germany","web-science, domain-ranking","","hyperlinkgraph/cc-main-2021-feb-apr-may/hostgraph","","","" "Olexandra Belz – Ivan Franko National University of Lviv, Ukraine","Use of schema.org micro-markup in e-commerce projects","http://baltijapublishing.lv/index.php/threeseas/article/view/1964/1973","papers","20220101Z00:00:00","","The purpose of the article is to identify the most effective schema.org micro-markup schemes used in e-commerce projects. Methodology. The research included competitive intelligence among the leading online platforms operating in Europe in general and in Ukraine in particular. The study involved TOP-8 e-commerce projects in Ukraine and TOP-9 global cross-border marketplaces operating in Europe. The service validator.schema.org was chosen as the research tool. Results. The study showed that the most popular schema.org micro-markup format is JSON-LD. In general, 82.4% of the surveyed sites use JSON-LD microdata format. Some sites use two microdata formats: JSON-LD and Microdata. But none of the top online marketplaces use the RDFa micro-markup format. Popular marketplaces operating in Ukraine and Europe often use the same types of schema.org vocabulary. 
However, the frequency of using micro-markup by top marketplaces operating in Ukraine is much higher than the frequency of using micro-markup by top marketplaces operating in Europe. In addition, Ukrainian marketplaces use a much wider list of schema.org micro-markup properties than marketplaces operating in Europe. However, no online store has implemented the properties of advantages and disadvantages of goods recommended by Google in the scheme. Practical implications. The study suggests schema.org micro-markup schemes for homepage, category page, product page, about page, payment and delivery page, warranty and returns page, contact page and blog. The proposed templates of micro-markup schemes were validated using the validator.schema.org service. The study recommends using the JSON-LD format for semantic markup of website content. Value/originality. Implementation of effective semantic markup of site content will allow search engines to more accurately identify the information presented on the site. This, in turn, will improve the visibility of the online marketplace in the Search Engine Results Page of Google, Bing, Yahoo! etc.","Ivan Franko National University of Lviv, Ukraine","e-commerce, online marketplaces, linked data, schema.org annotations, SEO","Since 2008, the Common Crawl project has been crawling websites to collect web page data (extracting metadata and web page text). At the time of writing, the latest scan took place from November 26 to December 10, 2022. As a result of this scan, 3.35 billion web pages were processed and 420 petabytes of content were removed (Common Crawl, 2022). Both scientists and practitioners are working with the obtained data sets of the Common Crawl project.¶ On September 22, 2022, the Web Data Commons (WDC) project released the Schema.org Table Annotation Benchmark (SOTAB) for public download (Web Data Commons, 2022).","","","WebDataCommons","" "Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim – Kakao Brain, South Korea","Coyo-700m: Image-text pair dataset","https://github.com/kakaobrain/coyo-dataset","papers","20220101Z00:00:00","","We collected about 10 billion pairs of alt-text and image source in HTML documents in Common Crawl from Oct. 2020 to Aug. 2021. and eliminated uninformative pairs through the image and text level filtering process with minimal cost. 
The following figure outlines our data collection procedure.","Kakao Brain, South Korea","nlp/multimodal-corpora","","five CommonCrawl dumps, ranging from 2017 to 2020","COYO-700M","","" "Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, Quoc Le – Google","LaMDA: Language Models for Dialog Applications","https://arxiv.org/abs/2201.08239","papers","20220101Z00:00:00","","","Google","nlp/language-model, nlp/transformer-language-model","E Pre-training data composition¶ The pre-training data, called Infiniset, is a combination of dialog data from public dialog data and other public web documents. It consists of 2.97B documents and 1.12B dialogs with 13.39B utterances. The composition of the data is as follows: 50% dialogs data from public forums; 12.5% C4 data [11]; 12.5% code documents from sites related to programming like Q&A sites, tutorials, etc; 12.5% Wikipedia (English); 6.25% English web documents; and 6.25% Non-English web documents. The total number of words in the dataset is 1.56T. Note that this composition was chosen to achieve a more robust performance on dialog tasks (Section 4) while still keeping its ability to perform other tasks like code generation. As future work, we can study how the choice of this composition may affect the quality of some of the other NLP tasks performed by the model.","","","Tensorflow-C4","" "Mark Edward Phillips, Sawood Alam – University of North Texas, USA; Internet Archive, USA","Moving the End of Term Web Archive to the Cloud to Encourage Research Use and Reuse","https://digital.library.unt.edu/ark:/67531/metadc1998717/m2/1/high_res_d/EOT_WADL_2022.pdf","papers","20220101Z00:00:00","","The End of Term Web (EOT) Archive is a collaborative project with a goal of collecting the United States federal web, loosely defined as .gov and .mil, every four years coinciding with presidential elections and often a transition in the Executive Branch of the government. In 2021 the End of Term team began to process the longitudinal web archive for EOT-2008, EOT-2012, EOT-2016, and EOT-2020 to move into the Amazon S3 storage service as part of the Amazon Open Data Program. This effort adopted tools, structures, and documentation developed by Common Crawl in an effort to maximize potential research access and reuse of existing tools and documentation. This paper presents the process of organizing, staging, processing, and moving these collections into the Amazon cloud.","University of North Texas, USA; Internet Archive, USA","web archive","","","","","" "Gilles Adda, Annelies Braffort, Ioana Vasilescu, François Yvon – Université Paris-Saclay, CNRS, LISN, Paris, France","Deliverable D1.14 Report on the French Language. 
European Language Equality (ELE); EU project no. LC-01641480 – 101018166","https://european-language-equality.eu/wp-content/uploads/2022/03/ELE___Deliverable_D1_14__Language_Report_French_.pdf","papers","20220101Z00:00:00","","","Université Paris-Saclay, CNRS, LISN, Paris, France","nlp/resources, French, nlp/language-models, nlp/text-corpora","The CommonCrawl project³⁷ [³⁷https://commoncrawl.org/] aggregates Web crawled data that is orders of magnitude larger than these resources for many languages; furthermore this corpus is being updated on a regular basis. Using parts of the French subset of CommonCrawl, possibly conjoined with the more curated corpora alluded to above, has made it possible to train large-scale BERT-style Language Models (LMs) – FlauBERT (Le et al., 2020) is built with a corpus containing about 12B running words, CamemBERT (Martin et al., 2020) uses the 22B words OSCAR, and these numbers continue to grow, albeit at a much slower pace than the corresponding English corpora.","","","","" "Asadullah Safi, Satwinder Singh – Nangarhar University, Afghanistan; Central University of Punjab, Bathinda, Punjab, India","A Systematic Literature Review on Phishing Website Detection Techniques","https://www.sciencedirect.com/science/article/pii/S1319157823000034","papers","20230101Z00:00:00","Phishing, Phishing Detection, Deep Learning, Cyber Security, Machine Learning","Phishing is a fraud attempt in which an attacker acts as a trusted person or entity to obtain sensitive information from an internet user. In this Systematic Literature Survey (SLR), different phishing detection approaches, namely Lists Based, Visual Similarity, Heuristic, Machine Learning, and Deep Learning based techniques, are studied and compared. For this purpose, several algorithms, data sets, and techniques for phishing website detection are revealed with the proposed research questions. A systematic Literature survey was conducted on 80 scientific papers published in the last five years in research journals, conferences, leading workshops, the thesis of researchers, book chapters, and from high-rank websites. The work carried out in this study is an update in the previous systematic literature surveys with more focus on the latest trends in phishing detection techniques. This study enhances readers' understanding of different types of phishing website detection techniques, the data sets used, and the comparative performance of algorithms used. Machine Learning techniques have been applied the most, i.e., 57 as per studies, according to the SLR. In addition, the survey revealed that while gathering the data sets, researchers primarily accessed two sources: 53 studies accessed the PhishTank website (53 for the phishing data set) and 29 studies used Alexa's website for downloading legitimate data sets. Also, as per the literature survey, most studies used Machine Learning techniques; 31 used Random Forest Classifier. Finally, as per different studies, Convolution Neural Network (CNN) achieved the highest Accuracy, 99.98%, for detecting phishing websites.","Nangarhar University, Afghanistan; Central University of Punjab, Bathinda, Punjab, India","computer-security/internet-security, web-security","[phishing website detection research relying] Common Crawl (Rao et al., 2019); (Rashid et al., 2020); (Geyik et al., 2021); (Korkmaz and Sahingoz, 2020); (Chiew et al., 2019); (Feng and Yue, 2020); (Wei et al., 2020)","","","","" "Josh A. 
Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, Katerina Sedova – Georgetown University’s Center for Security and Emerging Technology, USA; OpenAI; Stanford Internet Observatory, USA","Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations","https://arxiv.org/abs/2301.04246","papers","20230101Z00:00:00","Computers and Society (cs.CY), FOS: Computer and information sciences, FOS: Computer and information sciences","","Georgetown University’s Center for Security and Emerging Technology, USA; OpenAI; Stanford Internet Observatory, USA","nlp/generative-language-models, ai/ethics-of-machine-learning, cc-cited-not-used","While some of this data is typically taken from relatively structured sources such as Wikipedia, a large majority of data usually comes from tools like Common Crawl that scrape the web for publicly available text.¹⁴⁷ [147. CommonCrawl freely publishes its archives of web data. See “So you’re ready to get started.,” Common Crawl, accessed June 27, 2022, https://commoncrawl.org/the-data/get-started/. But anyone can build their own software for web scraping or use other tools to extract data from websites.]","","","","" "Xinyue Wang – Virginia Tech, USA","Large Web Archive Collection Infrastructure and Services","http://hdl.handle.net/10919/113345","papers","20230101Z00:00:00","","The web has evolved to be the primary carrier of human knowledge during the information age. The ephemeral nature of much web content makes web knowledge preservation vital in preserving human knowledge and memories. Web archives are created to preserve the current web and make it available for future reuse. In addition to its preservation purpose, web archive data is also used as a source for research and for lost information discovery. However, the reuse of web archive data is inherently challenging because of the scale of data size and requirements of big data tools to serve and analyze web archive data efficiently. In this research, we propose to build a web archive big data processing infrastructure that can support efficient and scalable web archive reuse like quantitative data analysis and browsing services. We adopt industry frameworks and tools to establish a platform that can provide high-performance computation for web archive initiatives and users. We propose to convert the standard web archive data file format to a columnar data format for efficient future reuse. Our experiments show that our proposed design can significantly improve quantitative data analysis tasks for common web archive data usage. Our design can also serve an efficient web browsing service without adopting a sophisticated web hosting architecture. In addition to the standard web archive data, we also integrate Twitter data into our design as a unique web archive resource. Twitter is a prominent source of data for researchers in a variety of fields and an integral element of the web's history. We aggregate the Twitter data from different sources and integrate it into the suggested design for reuse. We are able to greatly increase the processing performance of workloads around social media data by overcoming the data loading bottleneck with a web-archive-like Parquet data format.","Virginia Tech, USA","web-archiving, data formats, big data, data processing, WARC, Parquet, CDX","We use Common Crawl’s web archiving data crawled from May 20 to 23, 2018. 
The data set consists of 1219 Gzip compressed WARC files totaling 0.98 TB, and contains 53,324,440 records. The WARC files are organized by crawling time, each containing records crawled from a mutually exclusive time span. We then reformat the WARC files to yield the following five datasets for comparison: 1) the original WARC files; 2) case 1 plus CDX index files built against all the original WARC files; 3) Parquet files containing the same information as case 1, with most columns in String type; 4) the same as case 3 but the Timestamp column in INT64 Timestamp type; 5) Avro, [...]","","","","" "Petros Terzis – University College London, United Kingdom","Building Programmable Commons","https://osf.io/preprints/socarxiv/yuef5/","papers","20230101Z00:00:00","","","University College London, United Kingdom","digital-commons, public-commons, cc-cited-not-used","Programmable commons and the public value of programmability are thus introduced as parts of a broader political project that aspires to democratise access to, and management of these resources. By drawing on the history of a family of commons -namely intellectual commons, infrastructure commons, and global commons-, this paper explores the material form and impact of infocomputational technologies and presents a blend of bottom-up and top-down initiatives for their commons-based organisation and governance.","","","","" "Hans W. A. Hanley, Deepak Kumar, Zakir Durumeric – Stanford University, USA","A Golden Age: Conspiracy Theories' Relationship with Misinformation Outlets, News Media, and the Wider Internet","https://arxiv.org/abs/2301.10880","papers","20230101Z00:00:00","","Do we live in a ""Golden Age of Conspiracy Theories?"" In the last few decades, conspiracy theories have proliferated on the Internet with some having dangerous real-world consequences. A large contingent of those who participated in the January 6th attack on the US Capitol believed fervently in the QAnon conspiracy theory. In this work, we study the relationships amongst five prominent conspiracy theories (QAnon, COVID, UFO/Aliens, 9-11, and Flat-Earth) and each of their respective relationships to the news media, both mainstream and fringe. Identifying and publishing a set of 755 different conspiracy theory websites dedicated to our five conspiracy theories, we find that each set often hyperlinks to the same external domains, with COVID and QAnon conspiracy theory websites having the largest amount of shared connections. Examining the role of news media, we further find that not only do outlets known for spreading misinformation hyperlink to our set of conspiracy theory websites more often than mainstream websites but this hyperlinking has increased dramatically between 2018 and 2021, with the advent of QAnon and the start of COVID-19 pandemic. 
Using partial Granger-causality, we uncover several positive correlative relationships between the hyperlinks from misinformation websites and the popularity of conspiracy theory websites, suggesting the prominent role that misinformation news outlets play in popularizing many conspiracy theories.","Stanford University, USA","nlp/fake-news-detection, misinformation, disinformation, conspiracy theories, web-science/hyperlinkgraph","Using our own web scrapes and pages historically scraped by Common Crawl,¹ [¹https://commoncrawl.org/] we then document the state and the changing behaviors of the conspiracy theory ecosystem and their relationship to a separate set of 530 known misinformation outlets, 565 authentic news websites, and 528 non-news websites. [...] Utilizing the Common Crawl harmonic and PageRank centrality measures that measure a website’s centrality across all of the crawled Internet, we then find many of the websites in our dataset have relatively high network centrality, suggesting that many of them are not peripheral on the Internet but actually near the Internet’s core/are mainstream. Indeed examining, the hyperlink connections between news media and these conspiracy theories, we find that many of them rely heavily on mainstream as well as misinformation outlets (compared to non-news websites) for their information, with many popular misinformation outlets also hyperlinking back to many of these conspiracy theory websites. [...] 4.1 Common Crawl Page Retrieval and Website Crawling To gather the set of hyperlinks between our websites, we utilize Common Crawl data [92]—widely considered the most complete publicly available source of web crawl data—and our own website crawls. For each website in our dataset, we collect all the domain’s HTML pages that were indexed by Common Crawl before August 2021. In addition to Common Crawl data, we further utilize our own website scrapes. We utilize our own crawls, in addition to Common Crawl, due to noisiness, missing pages, and missing domains within the Common Crawl dataset [85]. For example, 309 particularly small conspiracy theory domains were not contained within the Common Crawl dataset (i.e. these websites often only contained a few dozen pages). Thus for each website in our dataset, we further gather all the HTML pages 10 hops from each website’s homepage (i.e., we collect all URLs linked from the homepage (1st hop), then all URLs linked from the pages that were linked by the homepage (2nd hop), and so forth). For each HTML page from our scrapes and Common Crawl, we parse the HTML, detect the date that page was published, and collect hyperlinks to other pages (i.e., HTML <a> tags). Altogether we gather the available Common Crawl pages and scrape the HTML for our 755 conspiracy theory, 530 misinformation, 565 authentic news, and 528 non-news websites. [...] Utilizing Common Crawl network data [ 61] over the indexed Internet (87.7 million websites), we thus determine the network centrality of our set of conspiracy-focused websites to understand if each conspiracy theory website category is “core” (regularly utilized on the Internet) or “peripheral”. We utilize centralities across Common Crawl’s dataset rather than our partial one in order to get a sense of each conspiracy theory’s centrality on the entire Internet. 
While only 446 of our conspiracy theory websites are within the Common Crawl dataset, this analysis allows us to fully understand the relative roles that each conspiracy theory website group in our dataset plays on the wider Internet.","","","","" "Ralph Peeters, Reng Chiz Der, Christian Bizer – University of Mannheim, Germany","WDC Products: A Multi-Dimensional Entity Matching Benchmark","https://arxiv.org/abs/2301.09521","papers","20230101Z00:00:00","","","University of Mannheim, Germany","semantic-web, semantic-web/microformats, e-commerce, linked data, schema.org annotations","The first step of the pipeline is the extraction of large amounts of product offers from the Common Crawl⁴ [⁴https://commoncrawl.org/] using schema.org annotations. Some product offers contain product identifiers like MPNs and GTINs which allow us to group offers into [...] The Web Data Commons6 project regularly extracts schema.org annotations from the Common Crawl, the largest web corpus available to the public, in order to monitor the adoption of semantic annotations on the Web and to provide the extracted data for public download. The WDC Products benchmark uses product offers from the WDC Product Data Corpus V2020 (PDC2020)7. The corpus was created by extracting schema.org product data from the September 2020 version of the Common Crawl. The extracted data goes through a pipeline of cleansing steps such as removing offers from listing pages as well as advertisements that are contained in a page in addition to the main offer [31]. The resulting PDC2020 corpus consists of ∼98 million product offers originating from 603,000 websites.","CC-MAIN-2020-40","","","" "Xavier Amatriain – amatriain.net","Transformer models: an introduction and catalog","https://arxiv.org/abs/2302.07730","papers","20230101Z00:00:00","","","amatriain.net","nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","","","","","" "Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr – Google; ETH Zurich, Switzerland; NVIDIA; Robust Intelligence","Poisoning Web-Scale Training Datasets is Practical","https://arxiv.org/abs/2302.10149","papers","20230101Z00:00:00","Cryptography and Security (cs.CR), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences","Deep learning models are often trained on distributed, webscale datasets crawled from the internet. In this paper, we introduce two new dataset poisoning attacks that intentionally introduce malicious examples to a model's performance. Our attacks are immediately practical and could, today, poison 10 popular datasets. Our first attack, split-view poisoning, exploits the mutable nature of internet content to ensure a dataset annotator's initial view of the dataset differs from the view downloaded by subsequent clients. By exploiting specific invalid trust assumptions, we show how we could have poisoned 0.01% of the LAION-400M or COYO-700M datasets for just $60 USD. Our second attack, frontrunning poisoning, targets web-scale datasets that periodically snapshot crowd-sourced content -- such as Wikipedia -- where an attacker only needs a time-limited window to inject malicious examples. 
In light of both attacks, we notify the maintainers of each affected dataset and recommend several low-overhead defenses.","Google; ETH Zurich, Switzerland; NVIDIA; Robust Intelligence","nlp/corpus-construction, computer-security, nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","B.3 Common Crawl Common Crawl is a petabyte-scale corpus of web crawl data that is repeatedly captured on a roughly monthly basis. Each archive is a complete re-crawl of the internet that records the full activity, including all requests of the crawler and the host responses—with both HTTP headers and content. As such, each archive contains a static snapshot of all crawled pages at the time of visit. This may include new page content not seen during a previous crawl, and may exclude content that has become stale since the previous crawl. For example, data crawled during September 24 through October 8, 2022 contains 3.15 billion web pages with 380 TiB of uncompressed content from 34 million registered domains—1.3 billion URLs were not visited in any of the prior crawls.¹⁴ The Common Crawl dataset is vulnerable to an attack which is similar to both our frontrunning and split-view poisoning attacks. The adversary can purchase an expired domain which was previously contained in the Common Crawl, and it will be re-crawled with the adversary’s choice of content, which will then appear in subsequent Common Crawl snapshots. Notice that, differently from the snapshot-poisoning attack on Wikipedia, there is no content moderation here and so the adversary simply needs to continue to control the domain to poison all future Common Crawl snapshots. Buying recently-expired domains that existed in previous Common Crawl snapshots allows a stronger form of attack where the attack can inject entirely new links into the crawl. This can be accomplished by adding links or subdomains to poisoned domains, and allowing the crawler to discover the new poisoned domains. Thus, an adversary may inject arbitrarily many pages into the Common Crawl dataset, not only from the originally expired subset. We do not implement this attack following our ethics statements outlined earlier. Since Common Crawl WARC files have been hosted by Amazon on a AWS Athena (serverless service)¹⁵, domain reconnaissance work to analyze URLs is inexpensive. Scanning through 10 years of Common Crawl data to analyze domains from popular TLDs and high number of Common Crawl entries cost us USD$ 0.84. While additional analysis might somewhat increase this cost, it remains an inexpensive way to search for vulnerable domains. Buying recently expired domains, or domains that have a dangling DNS record with an active IP address is preferred, as domains that failed to return a 200-OK status in consecutive crawls seem to be moved to a lower priority. For example, among expired domains we purchased, just one domain accounts for more than 90% of all status codes among the purchased domains, while other domains we purchased as early as 12/20/2020 have seen relatively less scraping traffic across a 3 year period.¹⁶ Because Common Crawl is enormous and uncurated (to accurately reflect the content of the internet) poisoning all of Common Crawl is impractical due to size. Additionally, it is not always apparent how consumers of this data are processing it for downstream machine learning tasks. However, there exist many derivative datasets which are constructed by curating a relevant subset of the Common Crawl. 
This includes the LAION-5B image dataset [57], the text dataset known as the Pile [23], the multilingual text dataset CC-100 [78], and the CCMatrix dataset [61], a translation dataset of pairs of translated sentences. Such curation actually amplifies the power of an attack: an attack which adds 1MB of text to the Common Crawl would be poisoning a 2.5 · 10⁻⁹ fraction of the Common Crawl, but if this text bypasses the curation done for the CC-100 dataset, it could instead poison a 1.2 · 10⁻⁵ fraction of the English corpus, or even a full 9.1% of the Oromo corpus.","","","","" "Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei – Microsoft","Language Is Not All You Need: Aligning Perception with Language Models","https://arxiv.org/abs/2302.14045","papers","20230101Z00:00:00","","","Microsoft","nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","Text Corpora We train our model with The Pile [GBB+20] and Common Crawl (CC). The Pile is a massive English text dataset built for training large-scale language models, which is produced from a variety of data sources. We exclude data splits from GitHub, arXiv, Stack Exchange, and PubMed Central. We also include the Common Crawl snapshots (2020-50 and 2021-04) datasets, CC-Stories, and RealNews datasets [SPP+19, SPN+22]. The entire datasets have been purged of duplicate and near-duplicate documents, as well as filtered to exclude downstream task data. Refer to Appendix B.1.1 for detailed descriptions of training text corpora.¶ Image-Caption Pairs The image-caption pairs are constructed from several datasets, including English LAION-2B [SBV+22], LAION-400M [SVB+21], COYO-700M [BPK+22], and Conceptual Captions [SDGS18, CSDS21]. English LAION-2B, LAION-400M, and COYO-700M are collected from web pages of the Common Crawl web data by extracting image sources and the corresponding alt-text. Conceptual Captions are also from internet web pages. More details can be found in Appendix B.1.2.¶ Interleaved Image-Text Data We collect interleaved multimodal data from the Common Crawl snapshot, which is a publicly available archive of web pages. We use a filtering process to select about 71M web pages from the original 2B web pages in the snapshot. We then extract the text and images from the HTML of each selected web page. For each document, we limit the number of images to five to reduce noise and redundancy. We also randomly discard half of the documents that only have one image to increase the diversity. We provide more details about the data collection process in Appendix B.1.3. 
By using this corpus, we enable KOSMOS-1 to handle interleaved text and image and improve its few-shot ability.","CC-MAIN-2020-50, CC-MAIN-2021-04","","The-Pile-English, CC-Stories, RealNews, LAION-400M, LAION-2B, COYO-700M","" "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample – Meta AI","LLaMA: Open and Efficient Foundation Language Models","https://arxiv.org/abs/2302.13971","papers","20230101Z00:00:00","Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.","Meta AI","nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","English CommonCrawl [67%]. We preprocess five CommonCrawl dumps, ranging from 2017 to 2020, with the CCNet pipeline (Wenzek et al., 2020). This process deduplicates the data at the line level, performs language identification with a fastText linear classifier to remove non-English pages and filters low quality content with an n-gram language model. In addition, we trained a linear model to classify pages used as references in Wikipedia v.s. randomly sampled pages, and discarded pages not classified as references.","five CommonCrawl dumps, ranging from 2017 to 2020","Tensorflow-C4","","" "Khaled Ammar – University of Waterloo, Ontario, Canada","Systems and Algorithms for Dynamic Graph Processing","https://uwspace.uwaterloo.ca/bitstream/handle/10012/19195/Ammar_Khaled.pdf","papers","20230101Z00:00:00","","","University of Waterloo, Ontario, Canada","graph-processing, web-science/hyperlinkgraph","Common Crawl experiments. Sixteen machines load 64 billion edges, index them, and track motifs in 20 batches of 10K random edge changes.","","","WDC-hyperlinkgraph, WDC-hyperlinkgraph (2014)","" "Saffron Huang, Divya Siddarth – Collective Intelligence Project (cip.org)","Generative AI and the Digital Commons","https://arxiv.org/pdf/2303.11074.pdf","papers","20230101Z00:00:00","","","Collective Intelligence Project (cip.org)","digital-commons, public-commons, nlp/corpus-construction, nlp/language-models, nlp/generative-language-models, cc-cited-not-used","GFMs are trained on the digital commons. Generative foundation models leverage large databases of scraped information (text, code, images) from the internet to train highly capable models. This depends on the availability of public, scrapable data and leverages the “collective intelligence” of humanity, including the painstakingly edited Wikipedia, millennia’s worth of books, billions of Reddit comments, hundreds of terabytes’ worth of images, and more³ [³LAION-5B, which Stable Diffusion is trained on, has 5 billion text-image pairs (Schuhmann et al., 2022).The Pile has 100+GB of books (Gao et al., 2020)]. 
They also rely on non- profits like Common Crawl (which build and maintain open repositories of web crawl data), Creative Commons (for open licenses for the data used), open source libraries, and other digital infrastructure. They also take advantage of aggregated user preferences; e.g. the WebText dataset underlying the GPT family of models uses Reddit “karma scores” to select content for inclusion. All of this is common digital information and infrastructure that many people contribute to.","","","","" "Alan Chan, Herbie Bradley, Nitarshan Rajkumar – University of Cambridge, United Kingdom; Mila, Université de Montréal, Canada; EleutherAI","Reclaiming the Digital Commons: A Public Data Trust for Training Data","https://arxiv.org/abs/2303.09001","papers","20230101Z00:00:00","","Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities from the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens that have generated the data necessary to train models do not have input on how their data are to be used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons, to license to commercial model developers for a percentage cut of revenues from deployment. First, we argue in detail for the existence of such a trust. We also discuss feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust. We propose a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.","University of Cambridge, United Kingdom; Mila, Université de Montréal, Canada; EleutherAI","digital-commons, public-commons, nlp/corpus-construction, nlp/language-models, nlp/generative-language-models, cc-cited-not-used","The data trust could also start from existing efforts, such as the Common Crawl.","","","","" "Michał Turski, Tomasz Stanisławek, Karol Kaczmarek, Paweł Dyda, Filip Graliński – Snowflake; Adam Mickiewicz University, Poznań, Poland","CCpdf: Building a High Quality Corpus for Visually Rich Documents from Web Crawl Data","https://arxiv.org/pdf/2304.14953.pdf","papers","20230101Z00:00:00","","In recent years, the field of document understanding has progressed a lot. A significant part of this progress has been possible thanks to the use of language models pretrained on large amounts of documents. However, pretraining corpora used in the domain of document understanding are single domain, monolingual, or nonpublic. 
Our goal in this paper is to propose an efficient pipeline for creating a big-scale, diverse, multilingual corpus of PDF files from all over the Internet using Common Crawl, as PDF files are the most canonical types of documents as considered in document understanding. We analysed extensively all of the steps of the pipeline and proposed a solution which is a trade-off between data quality and processing time. We also share a CCpdf corpus in the form of an index of PDF files along with a script for downloading them, which produces a collection useful for language model pretraining. The dataset and tools published with this paper offer researchers the opportunity to develop even better multilingual language models.","Snowflake; Adam Mickiewicz University, Poznań, Poland","nlp/language-models, nlp/corpus-construction, document understanding, PDF","As our input we used web indexes created by Common Crawl. [...] They crawl webpages and save them into crawl dumps. A crawl dump contains billions of webpages (hundreds of terabytes of uncompressed data) and a new dump has been published nearly every month since March 2014. Some earlier, more irregular dumps starting from 2008 are also available.¹¹ Each dump also contains an index of the crawled pages. We decided to simply use the latest (and the largest) dump available at the time of writing this paper — the May 2022 dump.¹² [¹²https://commoncrawl.org/2022/06/may-2022-crawl-archive-now-available/] It contains 3.45 billion web pages, which amounts to 462 TB of uncompressed content. It would obviously be possible to apply the extraction procedure described in this paper to all crawls to obtain an even larger collection of PDFs, which would also allow for a diachronic analysis, but we wanted to focus on the most recent documents. Note that dumps contain only files considered as text files by the Common Crawl web robot. Mostly these are web pages in the HTML format, but, fortunately, PDFs are also treated as text files, being derivative of the PostScript page description language. This is not the case with, for instance, images, Excel files, or DOCX files. Consequently, such files cannot be amassed using the methods described in the aforementioned papers.¶ 3.2 PDF links extraction¶ We experimented with two methods for extracting links to PDF files (step 1 in Figure 1):¶ 1. using CDX files, i.e., index server files provided by Common Crawl;¶ 2. looking for links to PDF files in WARC, i.e., raw crawl data files.¶ The first method is simpler, as CDX files are easy to download and take up only 225 GB in total. The second method might yield more links to PDF files, but:¶ – it is impossible for us to download all WARCs. 
Only a limited number of them can be processed, though still a significant number of PDF links can be added even if a small percentage of all WARC files are processed,¶ – there is lower probability that the file linked is available at all, be it in the crawl dump or simply at the original address.¶ In CDX files, the MIME type of a captured file is specified, and we limited ourselves to the application/pdf type.¶ Hence, in this paper, we focus on the first method, which allows us to speed up the whole processing pipeline.","CC-MAIN-2022-21 (CDX)","","","" "Sadia Nourin, Van Tran, Xi Jiang, Kevin Bock, Nick Feamster, Nguyen Phong Hoang, Dave Levin – University of Maryland, USA; University of Chicago, USA","Measuring and Evading Turkmenistan’s Internet Censorship: A Case Study in Large-Scale Measurements of a Low-Penetration Country","https://doi.org/10.1145/3543507.3583189","papers","20230101Z00:00:00","Censorship Measurement, Web Filtering, Turkmenistan","Since 2006, Turkmenistan has been listed as one of the few Internet enemies by Reporters without Borders due to its extensively censored Internet and strictly regulated information control policies. Existing reports of filtering in Turkmenistan rely on a handful of vantage points or test a small number of websites. Yet, the country’s poor Internet adoption rates and small population can make more comprehensive measurement challenging. With a population of only six million people and an Internet penetration rate of only 38%, it is challenging to either recruit in-country volunteers or obtain vantage points to conduct remote network measurements at scale. We present the largest measurement study to date of Turkmenistan’s Web censorship. To do so, we developed TMC, which tests the blocking status of millions of domains across the three foundational protocols of the Web (DNS, HTTP, and HTTPS). Importantly, TMC does not require access to vantage points in the country. We apply TMC to 15.5M domains; our results reveal that Turkmenistan censors more than 122K domains, using different blocklists for each protocol. We also reverse-engineer these censored domains, identifying 6K over-blocking rules causing incidental filtering of more than 5.4M domains. Finally, we use Geneva, an open-source censorship evasion tool, to discover five new censorship evasion strategies that can defeat Turkmenistan’s censorship at both transport and application layers. We will publicly release both the data collected by TMC and the code for censorship evasion.","University of Maryland, USA; University of Chicago, USA","web-filtering, internet-censorship","[...] the payload of our probes contains domains curated from the Citizen Lab lists [5], the full Tranco list [42], and the Common Crawl Project [8]. Due to limited resources of our VPS, we opt to probe the first 10M FQDNs ranked by the Common Crawl Project instead of the full list of almost 400M FQDNs. [...] We scanned all regular expressions that TMC discovered against all FQDNs that we could obtain from DNS zone files provided via ICANN’s Centralized Zone Data Service [6] and the full host list from the Common Crawl Project [8], totaling 718M FQDNs.","hyperlinkgraph","","","" "Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, Yejin Choi – Allen Institute for Artificial Intelligence, USA; University of California, Santa Barbara, USA; Paul G. 
Allen School of Computer Science, University of Washington, USA; Columbia University, USA; Yonsei University, South Korea; LAION","Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved With Text","https://arxiv.org/abs/2304.06939","papers","20230101Z00:00:00","","","Allen Institute for Artificial Intelligence, USA; University of California, Santa Barbara, USA; Paul G. Allen School of Computer Science, University of Washington, USA; Columbia University, USA; Yonsei University, South Korea; LAION","nlp/corpus-construction, nlp/multimodal-corpora, ai/image-text-alignment","Multimodal C4 is an expansion of the text-only c4 dataset [21], which was created by taking the April 2019 snapshot from Common Crawl⁴ and applying several filters with the intention of retaining high-quality, natural English text. Each document in c4 consists of the text scraped from one URL. [...] We built the mmc4 dataset on top of c4 because: 1) c4 is a web-scale dataset widely adopted as a pre-training corpus [21, 25, 9, 29, 27]; 2) c4 is constructed from web pages, which frequently contain multimedia content like images: a multimodal sequence version is a natural extension; and 3) c4-en,⁵ the specific underlying subset from which we construct mmc4, has already been processed with several data-cleaning steps (including English-language identification by langdetect⁶ with at least 0.99 confidence; text deduplication removing duplicate three-sentence spans + placeholder text like “lorem ipsum”; and removal of any document containing any word on the “List of Dirty, Naughty, Obscene or Otherwise Bad Words”).⁷ See [21] for more information about the text-only c4. Importantly, by building on the popular text-only c4, prior text-only documentation efforts [11] can provide insight about potential biases and risks that could arise when training on our multimodal extension. We use the NLTK [4] sentence tokenizer to chunk each c4 document into a list of sentences.¶ Gathering images. We first retrieve the original webpages for each document in the c4-en dataset from the Common Crawl version 2019-18, which is the default version for c4. Next, we extract the URLs for downloadable images from the raw WAT files. We restrict the image extension to either png/jpeg/jpg, and exclude image URLs that contain the following tokens: {logo, button, icon, plugin, widget}. We attempt to download from these URLs, and resize images to a maximum dimension of 800px. We eliminate any c4 documents that do not contain valid, downloadable images at the time of collection (mid-to-late 2022). The starting point after this step is 115M documents and 1.37B images.","CC-MAIN-2019-18 (WET)","Allenai-multimodal-c4 (mmc4)","","" "Marius Løvold Jørgensen – UiT, The Arctic University of Norway, Norway","BacklinkDB: A Purpose-Built Backlink Database Management System","https://munin.uit.no/handle/10037/28861","papers","20230101Z00:00:00","","In order to compile a list of all the backlinks for a given webpage, we need knowledge about all the outgoing links on the web. Traversing the web and storing all the backlink data in a database allows us to efficiently retrieve the list of backlinks for a web page on demand. However, the web consists of billions of backlinks which translates to terabytes of data. As the web is continuously evolving, the database needs to be rebuilt periodically in order for it to closely resemble the current state of the web. 
This thesis presents BacklinkDB, a purpose-built database management system designed for managing a backlink database. Using a series of in-memory hash indices allows for high insert throughput when building the database. The backlink data for a given domain is stored together in sections throughout the database file. This allows for the requested backlink data to be easily located. With a simple SQL-inspired query language, the users can both insert and retrieve backlink data. The evaluation shows that building a purpose-built database management system allows us to make trade-offs regarding which performance metrics are most important. In this thesis, we will focus on creating a scalable backlink database management system with high insert performance.","UiT, The Arctic University of Norway, Norway","web-science/hyperlinkgraph, ir/backlinkdb","5.1.3 Data¶ The link data used in the experiments is downloaded from the Common Crawl website¹. Common Crawl is a non-profit organization that periodically crawls the web and publicizes the data. For the experiments described in this chapter, data from the August 2022 crawl² [²https://commoncrawl.org/2022/08/august-2022-crawl-archive-now-available/] is used.¶ Data preprocessing¶ Common Crawl provides data on all the indexable webpages. This data is provided in a series of WARC files found in their public repository. Common Crawl also provides WAT files, which are produced by processing the WARC files and extracting the metadata for each webpage. The WAT files contain a list of all the outgoing links for each of the webpages.¶ All external links from the WAT file are extracted to their own link file so that they can be directly inserted into a database. Each link is stored on a separate line in the file using spaces to separate the source domain, source path, destination domain, and destination path. All the backlinks containing URLs longer than 2048 characters are discarded. A link file is created for each of the WAT files. These link files contain all the information needed to build a backlink database.","CC-MAIN-2022-33 (WAT)","","","" "Stefano Calzavara, Florian Hantke, Moritz Wilhelm, Alvise Rabitti, Ben Stock – CISPA Helmholtz Center for Information Security, Germany; Università Ca’ Foscari, Venezia, Italy","You Call This Archaeology? Evaluating Web Archives for Reproducible Web Security Measurements","https://swag.cispa.saarland/papers/calzavara2023archaeology.pdf","papers","20230101Z00:00:00","","Given the dynamic nature of the Web, security measurements on it suffer from reproducibility issues. In this paper we take a systematic look into the potential of using web archives for web security measurements. We first evaluate an extensive set of web archives as potential sources of archival data, showing the superiority of the Internet Archive with respect to its competitors. We then assess the appropriateness of the Internet Archive for historical web security measurements, detecting subtleties and possible pitfalls in its adoption. Finally, we investigate the feasibility of using the Internet Archive to simulate live security measurements, using recent archival data in place of live data. Our analysis shows that archive-based security measurements are a promising alternative to traditional live security measurements, which is reproducible by design; nevertheless, it also shows potential pitfalls and shortcomings of archive-based measurements. 
As an important contribution, we use the collected knowledge to identify insights and best practices for future archive-based security measurements.","CISPA Helmholtz Center for Information Security, Germany; Università Ca’ Foscari, Venezia, Italy","computer-security/internet-security, web-science","Besides Memento-based archives, we also consider Common Crawl as a possible alternative source of archival data. Common Crawl archives parts of the Web once a month and stores the content as one snapshot. The reason why we use Common Crawl is that it contains a massive amount of data: its October 2022 snapshot includes more than 2.55 billion pages, with its index alone being larger than 2TB; moreover, Common Crawl was already used in a previous web security measurement [ 15, 36]. The content archived on Common Crawl is stored in form of large compressed files consisting of lists of WARC files. These WARC files hold meta information such as the requested datetime, content type, or content size, followed by the archived content.","","","","" "Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, Percy Liang – Stanford University, USA","Foundation Models and Fair Use","https://arxiv.org/abs/2303.15715","papers","20230101Z00:00:00","","Existing foundation models are trained on copyrighted material. Deploying these models can pose both legal and ethical risks when data creators fail to receive appropriate attribution or compensation. In the United States and several other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine. However, there is a caveat: If the model produces output that is similar to copyrighted data, particularly in scenarios that affect the market of that data, fair use may no longer apply to the output of the model. In this work, we emphasize that fair use is not guaranteed, and additional work may be necessary to keep model development and deployment squarely in the realm of fair use. First, we survey the potential risks of developing and deploying foundation models based on copyrighted content. We review relevant U.S. case law, drawing parallels to existing and potential applications for generating text, source code, and visual art. Experiments confirm that popular foundation models can generate content considerably similar to copyrighted material. Second, we discuss technical mitigations that can help foundation models stay in line with fair use. We argue that more research is needed to align mitigation strategies with the current state of the law. Lastly, we suggest that the law and technical mitigations should co-evolve. For example, coupled with other policy mechanisms, the law could more explicitly consider safe harbors when strong technical tools are used to mitigate infringement harms. This co-evolution may help strike a balance between intellectual property and innovation, which speaks to the original goal of fair use. But we emphasize that the strategies we describe here are not a panacea and more work is needed to develop policies that address the potential harms of foundation models.","Stanford University, USA","legal/copyright, legal/fair-use, nlp/language-model, ai/foundation-model, web-crawling, robots.txt","Implied Licenses and Common Crawl. On the other hand, many creators voluntarily post their works on the internet with permissions for web crawling. 
It is well-established that merely posting something on the internet does not waive the intellectual property interest in the work, but many data creators use an industry-standard “robots.txt” file to affirmatively include their website and data in caches and search indexes. In Field v. Google, Inc. (D. Nev. 2006) a district court held that Google could cache web content that did not disallow scraping via robots.txt, suggesting that there was an implied license and thus the use was not infringement. This license only extended to caching in that case, which does not necessarily reflect the uses of foundation models we discuss throughout this work, so it is unlikely to cover all the use cases we describe here. And the bounds of the uses covered by the robots.txt file are untested in court.²¹ While the issue of whether the implied license extends to foundation model training has not been resolved in litigation, it is possible that an outcome like Field v. Google, Inc. (D. Nev. 2006) would extend to some foundation model uses—in particular, for building a cached dataset and training a model.¶ It is worth noting that the use of a robots.txt header or other opt-out mechanism has implications for fair use also. Datasets and models like C4 (Raffel et al., 2019) and LAION-400M (Schuhmann, 2021) rely on CommonCrawl data which is crawled only if users explicitly allow it through their robots.txt file. CommonCrawl is able to host a snapshot of the internet largely because of fair use arguments. As the organization’s director argues, there is a transformation into a different—not easily human-readable—format, the organization does not take a snapshot of entire webpages, and the use itself is transformative (from actively presenting content to caching content) and for the public benefit (Leetaru, 2017). In Field v. Google, Inc. (D. Nev. 2006), respect for the robots.txt file also was considered in the fair use assessment with the court noting that Google in good faith followed industry standards that would prevent caching (respecting disallowing crawling via a robots.txt). It is possible, then, that providing an opt-out mechanism for data creators and respecting the robots.txt opt-out mechanism will be taken into account in assessing a fair use argument, as it was in Field v. Google, Inc. (D. Nev. 2006).²²¶ [...] Furthermore, if web-crawled data is used, restricting it to data that respects robots.txt opt-outs can make a fair use argument more tractable, though not guaranteed. As we noted before, in Field v. Google, Inc. (D. Nev. 2006), respect for the robots.txt file was considered in the fair use assessment with the court because it gave the plaintiff opportunity to opt out. This is likely why many webcrawl-based models rely on the CommonCrawl dataset as a source. Its webcrawl automatically respects robots.txt opt-outs and does not crawl every webpage in full. It is possible then that future fair use assessments could consider respecting the robots.txt opt-out—or implementing other opt-out mechanisms—favorably, as was the case in Field v. Google, Inc. (D. Nev. 2006). Conversely, ignoring a robots.txt opt-out could negatively impact a fair use assessment. 
However, Kapoor & Narayanan (2023) have argued that there are structural critiques of opt-out mechanisms beyond the current state of the law.¶","","","","" "Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, Gideon Mann – Bloomberg, New York, NY, USA; Bloomberg, Toronto, ON, Canada; Computer Science, Johns Hopkins University, Baltimore, MD, USA","BloombergGPT: A Large Language Model for Finance","https://arxiv.org/abs/2303.17564","papers","20230101Z00:00:00","","The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg's extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology. We release Training Chronicles (Appendix C) detailing our experience in training BloombergGPT.","Bloomberg, New York, NY, USA; Bloomberg, Toronto, ON, Canada; Computer Science, Johns Hopkins University, Baltimore, MD, USA","nlp/language-models, nlp/large-language-models, nlp/dataset-creation, financial markets, cc-cited-not-used","","","","","" "Joey Öhman, Severine Verlinden, Ariel Ekgren, Amaru Cuba Gyllensten, Tim Isbister, Evangelia Gogoulou, Fredrik Carlsson, Magnus Sahlgren – AI Sweden, Sweden; RISE, Sweden","The Nordic Pile: A 1.2TB Nordic Dataset for Language Modeling","https://arxiv.org/abs/2303.17183","papers","20230101Z00:00:00","","Pre-training Large Language Models (LLMs) require massive amounts of text data, and the performance of the LLMs typically correlates with the scale and quality of the datasets. This means that it may be challenging to build LLMs for smaller languages such as Nordic ones, where the availability of text corpora is limited. In order to facilitate the development of the LLMS in the Nordic languages, we curate a high-quality dataset consisting of 1.2TB of text, in all of the major North Germanic languages (Danish, Icelandic, Norwegian, and Swedish), as well as some high-quality English data. This paper details our considerations and processes for collecting, cleaning, and filtering the dataset.","AI Sweden, Sweden; RISE, Sweden","nlp/corpus-construction, nlp/text-corpora, nlp/language-model","Therefore, The Nordic Pile is composed mostly of existing sources, with a large por- tion of these originating from derivatives of Common Crawl data, such as OSCAR (Suárez et al., 2019; Ortiz Suárez et al., 2020) and Multilingual C4 (mC4) (Xue et al., 2021), which is a language- filtered version of C4 (Raffel et al., 2020).¶ [...] 
Web CC: Web data derived from Common Crawl¶ Similarly, Web CC is the most prominent of our categories.","","","","" "Dong Zhang – ","Should ChatGPT and Bard Share Revenue with Their Data Providers? A New Business Model for the AI Era","https://arxiv.org/abs/2305.02555","papers","20230101Z00:00:00","","With various AI tools such as ChatGPT becoming increasingly popular, we are entering a true AI era. We can foresee that exceptional AI tools will soon reap considerable profits. A crucial question arise: should AI tools share revenue with their training data providers in additional to traditional stakeholders and shareholders? The answer is Yes. Large AI tools, such as large language models, always require more and better quality data to continuously improve, but current copyright laws limit their access to various types of data. Sharing revenue between AI tools and their data providers could transform the current hostile zero-sum game relationship between AI tools and a majority of copyrighted data owners into a collaborative and mutually beneficial one, which is necessary to facilitate the development of a virtuous cycle among AI tools, their users and data providers that drives forward AI technology and builds a healthy AI ecosystem. However, current revenue-sharing business models do not work for AI tools in the forthcoming AI era, since the most widely used metrics for website-based traffic and action, such as clicks, will be replaced by new metrics such as prompts and cost per prompt for generative AI tools. A completely new revenue-sharing business model, which must be almost independent of AI tools and be easily explained to data providers, needs to establish a prompt-based scoring system to measure data engagement of each data provider. This paper systematically discusses how to build such a scoring system for all data providers for AI tools based on classification and content similarity models, and outlines the requirements for AI tools or third parties to build it. Sharing revenue with data providers using such a scoring system would encourage more data owners to participate in the revenue-sharing program. This will be a utilitarian AI era where all parties benefit.","","legal/copyright, legal/fair-use, nlp/language-model, ai/foundation-model, economic aspects of large language models, monetization of training data","","","","","" "Yangsibo Huang, Samyak Gupta, Zexuan Zhong, Kai Li, Danqi Chen – ","Privacy Implications of Retrieval-Based Language Models","https://arxiv.org/abs/2305.14888","papers","20230101Z00:00:00","","Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts, by incorporating retrieved text from external datastores. While it is well known that parametric models are prone to leaking private data, it remains unclear how the addition of a retrieval datastore impacts model privacy. In this work, we present the first study of privacy risks in retrieval-based LMs, particularly kNN-LMs. Our goal is to explore the optimal design and training procedure in domains where privacy is of concern, aiming to strike a balance between utility and privacy. Crucially, we find that kNN-LMs are more susceptible to leaking private information from their private datastore than parametric models. We further explore mitigations of privacy risks. 
When privacy information is targeted and readily detected in the text, we find that a simple sanitization step would completely eliminate the risks, while decoupling query and key encoders achieves an even better utility-privacy trade-off. Otherwise, we consider strategies of mixing public and private data in both datastore and encoder training. While these methods offer modest improvements, they leave considerable room for future work. Together, our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs. Our code is available at: [https://github.com/Princeton-SysML/kNNLM_privacy].","","","","","","","" "Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, Daphne Ippolito – ","A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity","https://arxiv.org/abs/2305.13169","papers","20230101Z00:00:00","","Pretraining is the preliminary and fundamental step in developing capable language models (LM). Despite this, pretraining data design is critically under-documented and often guided by empirically unsupported intuitions. To address this, we pretrain 28 1.5B parameter decoder-only models, training on data curated (1) at different times, (2) with varying toxicity and quality filters, and (3) with different domain compositions. First, we quantify the effect of pretraining data age. A temporal shift between evaluation data and pretraining data leads to performance degradation, which is not overcome by finetuning. Second, we explore the effect of quality and toxicity filters, showing a trade-off between performance on standard benchmarks and risk of toxic generations. Our findings indicate there does not exist a one-size-fits-all solution to filtering training data. We also find that the effects of different types of filtering are not predictable from text domain characteristics. Lastly, we empirically validate that the inclusion of heterogeneous data sources, like books and web, is broadly beneficial and warrants greater prioritization. These findings constitute the largest set of experiments to validate, quantify, and expose many undocumented intuitions about text pretraining, which we hope will help support more informed data-centric decisions in LM development.","","","","","","","" "Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli – Meta AI; Hebrew University of Jerusalem, Israel","Scaling Speech Technology to 1,000+ Languages","https://arxiv.org/abs/2305.13516","papers","20230101Z00:00:00","","Expanding the language coverage of speech technology has the potential to improve access to information for many more people. However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000 languages spoken around the world. The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task. The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging self-supervised learning. 
We built pre-trained wav2vec 2.0 models covering 1,406 languages, a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models for the same number of languages, as well as a language identification model for 4,017 languages. Experiments show that our multilingual speech recognition model more than halves the word error rate of Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.","Meta AI; Hebrew University of Jerusalem, Israel","nlp/speech-recognition, nlp/language-model","We evaluate this single model on FLEURS, CommonVoice, VoxPopuli and MLS. [...] During inference, we use n-gram models trained on CommonCrawl data. [...] ¶ Identifying Biased Words. We were not able to find speakers for most of the considered languages of this study and therefore use the following automatic procedure to determine religious words: for each word that occurs in the training data of MMS-lab, we compare the relative token frequency, that is, the rate at which the word type occurs in the MMS-lab data, to the relative token frequency in a general domain corpus; we use Common Crawl [Conneau et al., 2020b] as a general domain corpus. If the relative word frequency is at least twice as high in MMS-lab compared to Common Crawl, then we add it to the subset of words we include in our study. This enables us to evaluate on 51 languages of the FLEURS corpus since not all languages are covered by MMS-lab and we also need to find data in Common Crawl for each language. The automatic procedure has the added benefit of avoiding any potential biases introduced by human annotators. [...]¶ B n-gram Language Models¶ We train 5-gram language models on Common Crawl data using KenLM [Heafield, 2011] for each language in FLEURS. For languages that do not use spaces to separate words, we train 20-gram character-level language models. These languages are Mandarin Chinese (cmn), Cantonese Chinese (yue), Japanese (jpn), Thai (tha), Lao (lao), Burmese (mya) and Khmer (khm). The text is pre- processed following § 3.1.2 and we also remove emojis.³³","","","","" "Tetsuya Sakai, Sijie Tao, Nuo Chen, Yujing Li, Maria Maistro, Zhumin Chu, Nicola Ferro – Waseda University, Japan; University of Copenhagen, Denmark; Tsinghua University, P. R. C.; University of Padua, Italy","On the Ordering of Pooled Web Pages, Gold Assessments, and Bronze Assessments","https://doi.org/10.1145/3600227","papers","20230101Z00:00:00","web search, relevance assessments, pooling, test collections, information retrieval","The present study leverages a recent opportunity we had to create a new English web search test collection for the NTCIR-16 We Want Web (WWW-4) task, which concluded in June 2022. More specifically, through the test collection construction effort, we examined two factors that may affect the relevance assessments of depth-k pools, which in turn may affect the relative evaluation of different IR systems. The first factor is the document ordering strategy for the assessors, namely, prioritisation (PRI) and randomisation (RND). PRI is a method that has been used in NTCIR tasks for over a decade; it ranks the pooled documents by a kind of pseudorelevance for the assessors. The second factor is assessor type, i.e., Gold or Bronze. Gold assessors are the topic creators and therefore they “know” which documents are (highly) relevant and which are not; Bronze assessors are not the topic creators and may lack sufficient knowledge about the topics. 
We believe that our study is unique in that the authors of this paper served as the Gold assessors when creating the WWW-4 test collection, which enabled us to closely examine why Bronze assessments differ from the Gold ones. Our research questions examine assessor efficiency (RQ1), inter-assessor agreement (RQ2), system ranking similarity with different qrels files (RQ3), system ranking robustness to the choice of test topics (RQ4), and the reasons why Bronze assessors tend to be more liberal than Gold assessors (RQ5). The most remarkable of our results are as follows. Firstly, in the comparisons for RQ1 through RQ4, it turned out that what may matter more than the document ordering strategy (PRI vs. RND) and the assessor type (Gold vs. Bronze) is how well-motivated and/or well-trained the Bronze assessors are. Secondly, regarding RQ5, of the documents originally judged nonrelevant by the Gold assessors contrary to the Bronze assessors in our experiments, almost one half were truly relevant according to the Gold assessors’ own reconsiderations. This result suggests that even Gold assessors are far from perfect; budget permitting, it may be beneficial to hire highly-motivated Bronze assessors in addition to Gold assessors so that they can complement each other.","Waseda University, Japan; University of Copenhagen, Denmark; Tsinghua University, P. R. C.; University of Padua, Italy","ir/test-collection, ir/web-search, ir/search-engine-evaluation, nlp/corpus-construction","The WWW-4 task introduced a new English web corpus called Chuweb21, which was constructed based on the April 2021 block of Common Crawl dataset.⁹ [⁹ https://commoncrawl.org/2021/04/april-2021-crawl-archive-now-available/] Details of the corpus construction process can be found in the WWW-4 overview paper [38]. Chuweb21 contains 82,451,337 HTMLs or 1.69 TiB of compressed content; it is publicly available.¹⁰","","Chuweb21","","" "Hanlin Li, Nicholas Vincent, Stevie Chancellor, Brent Hecht – University of California, Berkeley, USA; University of California, Davis, USA; University of Minnesota, Minneapolis, USA; Northwestern University, Evanston, USA","The Dimensions of Data Labor: A Road Map for Researchers, Activists, and Policymakers to Empower Data Producers","https://arxiv.org/pdf/2305.13238.pdf","papers","20230101Z00:00:00","","Many recent technological advances (e.g. ChatGPT and search engines) are possible only because of massive amounts of user-generated data produced through user interactions with computing systems or scraped from the web (e.g. behavior logs, user-generated content, and artwork). However, data producers have little say in what data is captured, how it is used, or who it benefits. Organizations with the ability to access and process this data, e.g. OpenAI and Google, possess immense power in shaping the technology landscape. By synthesizing related literature that reconceptualizes the production of data for computing as ``data labor'', we outline opportunities for researchers, policymakers, and activists to empower data producers in their relationship with tech companies, e.g advocating for transparency about data reuse, creating feedback channels between data producers and companies, and potentially developing mechanisms to share data's revenue more broadly. 
In doing so, we characterize data labor with six important dimensions - legibility, end-use awareness, collaboration requirement, openness, replaceability, and livelihood overlap - based on the parallels between data labor and various other types of labor in the computing literature.","University of California, Berkeley, USA; University of California, Davis, USA; University of Minnesota, Minneapolis, USA; Northwestern University, Evanston, USA","legal/copyright, cc-cited-not-used, user-generated data, empowerment, data leverage","For example, publicly available texts and artwork enabled the creation of generative AI models like ChatGPT and Dall-E because model developers were able to scrape and process data from billions of web pages¹. [¹https://commoncrawl.org/2022/10/sep-oct-2022-crawl-archive-now-available/]","","","","" "Mohamed Raouf Kanfoud, Abdelkrim Bouramoul – University of Constantine 2 – Abdelhamid Mehri, El Khroub, Algeria","Tackling the multilingual and heterogeneous documents with the pre-trained language identifiers","https://doi.org/10.1080/1206212X.2023.2218236","papers","20230101Z00:00:00","","The Web has become one of the most important data sources, and the content shared is most often multilingual, as users belong to different cultures and speak different languages. Multilingual content (document) is not suitable for many people who only need content in one language. Furthermore, dividing a multilingual document into monolingual documents helps researchers extract only the text of the desired language to use in different tasks such as training or model testing. Therefore, it is challenging to clean and divide the raw content manually. This paper presents an automatic approach to dividing a multilingual document and reassembling it into monolingual documents by examining three existing state-of-the-art tools for Language Identification (LI). We prepared different corpora with different heterogeneity characteristics for the evaluation and evaluated their code-switching pattern using three different code-switching metrics. The proposed approach reached 99% as the best accuracy result for the long segment (long text) and 90% for the mixed segment. In addition, a good correlation was found between the I-Index and accuracy with Pearson’s r = −0.998.","University of Constantine 2 – Abdelhamid Mehri, El Khroub, Algeria","nlp/language-identification, nlp/corpus-construction, multi-lingual documents","The authors collected data from a non-profit foundation, Common Crawl, which explores the Web and provides data freely to the public. The collected datasets are heterogeneous and multilingual.","","","","" "Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Baptiste Pannier, Ebtesam Almazrouei, Julien Launay – LightOn; Technology Innovation Institute, Abu Dhabi, United Arab Emirates; LPENS, École normale supérieure, Paris, France","The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only","https://falconllm.tii.ae/Falcon_LLM_RefinedWeb.pdf","papers","20230101Z00:00:00","","Large language models are commonly trained on a mixture of filtered web data and curated “high-quality” corpora, such as social media conversations, books, or technical papers. This curation process is believed to be necessary to produce performant models with broad zero-shot generalization abilities. 
However, as larger models requiring pretraining on trillions of tokens are considered, it is unclear how scalable is curation and whether we will run out of unique high-quality data soon. At variance with previous beliefs, we show that properly filtered and deduplicated web data alone can lead to powerful models; even significantly outperforming models from the state-of-the-art trained on The Pile. Despite extensive filtering, the high-quality data we extract from the web is still plentiful, and we are able to obtain five trillion tokens from CommonCrawl. We publicly release an extract of 600 billion tokens from our REFINEDWEB dataset, and 1.3/7.5B parameters language models trained on it*.","LightOn; Technology Innovation Institute, Abu Dhabi, United Arab Emirates; LPENS, École normale supérieure, Paris, France","nlp/language-models, nlp/large-language-models, nlp/text-corpora","Pipelines for web data. Massive web datasets are typically built upon CommonCrawl, a publicly available scrape of the internet, which has now been running for 12 years and has collected petabytes of data. [...] We introduce MDR (MacroData Refinement), a pipeline for filtering and deduplicating web data from CommonCrawl at very large scale. [...] CommonCrawl is available in either WARC (raw HTML response), or WET files (preprocessed to only include plain text). Individual files correspond to a page at a given URL; these constitute single documents/samples. Working with WET files would spare us from running our own HTML extraction; however, in line with previous works (Gao et al., 2020; Rae et al., 2021), we found WET files to include undesirable navigation menus, ads, and other irrelevant texts. Accordingly, our pipeline starts from raw WARC files, read with the warcio library. [...] RefinedWeb is built using all CommonCrawl dumps until the 2023-06 one; it could be updated with additional dumps as they are released. The public release of RefinedWeb is a 600GT random extract of the 5,000GT of the full dataset. For all experiments, we randomly sampled from the public extract, or earlier development versions of it.","“using all CommonCrawl dumps until the 2023-06 one” (WARC files)","","","" "Tom Taulli – ","Data: The Fuel for Generative AI","https://doi.org/10.1007/978-1-4842-9367-6_2","papers","20230101Z00:00:00","","A large language model (LLM) processes huge amounts of data for its generative AI systems. They are on the scale of petabytes. Consider that a petabyte is 1000 terabytes. This would hold about 500 billion pages of standard text. No doubt, the generative models for images and videos are much larger.","","nlp/large-language-models, cc-cited-not-used","","","","","" "Gilles Adda, Ioana Vasilescu, François Yvon – Université Paris-Saclay, CNRS, LISN, Paris, France","Language Report French","https://doi.org/10.1007/978-3-031-28819-7_16","papers","20230101Z00:00:00","","This chapter presents a survey of the current state of technologies for the automatic processing of the French language. It is based on a thorough analysis of existing tools and resources for French, and also provides an accurate presentation of the domain and its main stakeholders (Adda et al. 2022). The chapter documents the presence of French on the internet and describes in broad terms the existing technologies for the French language. 
It also spells out general conclusions and formulates recommendations for progress towards deep language understanding for French.","Université Paris-Saclay, CNRS, LISN, Paris, France","nlp/resources, French, nlp/language-models, nlp/text-corpora","The CommonCrawl project aggregates Web data that is orders of magnitude larger than these resources; and it is updated on a regular basis. Using French subsets of CommonCrawl, it has been possible to train large language models (LMs): FlauBERT uses a corpus of 12B running words, while CamemBERT uses the 22B words OSCAR. Other large LMs for French are available for research and commercial use; they help to boost the state-of-the-art for multiple NLP tasks.","","","","" "Asaad Alghamdi, Xinyu Duan, Wei Jiang, Zhenhai Wang, Yimeng Wu, Qingrong Xia, Zhefeng Wang, Yi Zheng, Mehdi Rezagholizadeh, Baoxing Huai, Peilun Cheng, Abbas Ghaddar – AI Cognitive Team, Tonomus; Huawei Cloud Computing Technologies Co., Ltd.; Huawei Technologies Co., Ltd.","AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural Language Processing","https://arxiv.org/abs/2306.06800","papers","20230101Z00:00:00","","","AI Cognitive Team, Tonomus; Huawei Cloud Computing Technologies Co., Ltd.; Huawei Technologies Co., Ltd.","nlp/language-models, nlp/large-language-models, nlp/text-corpora","We mainly leverage all (up to July 2022) of the 90 Common Crawl³ monthly web scrapes in order to collect massive amount of Arabic textual data. [...] Our pre-training corpus is mainly sourced from the publicly available web scrapes of the Common Crawl (CC) project. We downloaded 90 shards of CC monthly data ranging from May 2013 (the earliest available) up to July 2022. Also, we use [...]","","","","" "Denley Lam, Letitia Li, Cory Anderson – FAST Labs, BAE Systems, Arlington, VA, USA","PDF investigation with parser differentials and ontology","https://www.techrxiv.org/articles/preprint/PDF_investigation_with_parser_differentials_and_ontology/23290277","papers","20230101Z00:00:00","","","FAST Labs, BAE Systems, Arlington, VA, USA","data formats, PDF, PDF parsing, information-security, computer-security","Three thousand and twenty one error regexes were gathered by running our set of PDF parsers through Govdocs1 [27] and Common Crawl [28], a collection of nearly one million freely distributable document files.","","","","" "Joel E. Fischer – Mixed Reality Laboratory, School of Computer Science, University of Nottingham, United Kingdom","Generative AI Considered Harmful","https://doi.org/10.1145/3571884.3603756","papers","20230101Z00:00:00","","","Mixed Reality Laboratory, School of Computer Science, University of Nottingham, United Kingdom","ai/ethics-of-machine-learning, nlp/large-language-models, nlp/generative-language-models, cc-cited-not-used","[⁷This article is written for the CUI 2023 “Provocations” track that “should have the potential to spark debate and discussion at the conference”] [...] It's worth noting that the lack of attribution starts with Common Crawl and similar archives; they appear to erase authorship and ownership of its sources, the largely human-written contents on websites. Instead of a heterogeneous collection of web sites (i.e., the WWW), it becomes just one homogeneous and anonymous “dataset”. This coincides with a worrying trend of these archives to frame their work as contributing to notions of “open data” and asking for “data donation” without explicating stance on ownership (you lose it) and attribution (there is none)¹⁰. 
[¹⁰https://commoncrawl.org/big-picture/what-you-can-do/]","","","","" "Andrea Stocco, Alexandra Willi, Luigi Libero Lucio Starace, Matteo Biagiola, Paolo Tonella – Università della Svizzera italiana, Switzerland; Università degli Studi di Napoli Federico II, Italy","Neural Embeddings for Web Testing","https://arxiv.org/pdf/2306.07400.pdf","papers","20230101Z00:00:00","","","Università della Svizzera italiana, Switzerland; Università degli Studi di Napoli Federico II, Italy","web-testing, nlp/word-embeddings, neural-embeddings, GUI-testing","We use three existing datasets available from the study by Yandrapally et al. [12], plus an additional dataset of web pages collected by the Common Crawl project [38]. [...] For training Doc2Vec, we used an additional dataset (listed third in Table 1) of 368,927 web pages available from the Common Crawl project [38], also used in previous research [19]. We refer to this dataset as CC. Similarly to DS, the web pages in CC are also collected by crawling real-world websites.","","","","" "Yanchen Wang, Lisa Singh – Georgetown University, USA","Adding guardrails to advanced chatbots","https://arxiv.org/pdf/2306.07500.pdf","papers","20230101Z00:00:00","","","Georgetown University, USA","ai/ethics-of-machine-learning, nlp/large-language-models, nlp/generative-language-models, cc-cited-not-used","Our analysis confirms that ChatGPT learns everything from human, including their biases. According to OpenAI, 60% of the training data come from Common Crawl, a large data set consisting of web pages, extracted metadata and text extractions through a big web crawler since 2008. Another 22% of data are from WebText2, containing all Reddit posts until December 2017 that have a score of 3 or higher. Another 16% are from books [29 ]. In their training data, more than 80% of the data are from the Internet and online discussions. Researchers have already shown that online discussions are very biased [30,31,32,33].","","","","" "Stacey Taylor, Vlado Keselj – Dalhousie University","Don’t Worry Accountants, ChatGPT Won’t Be Taking Your Job... Yet","https://web.cs.dal.ca/~vlado/papers/cai23s.pdf","papers","20230101Z00:00:00","","ChatGPT has demonstrated the ability to generate plausible human-like text and research is underway to evaluate and benchmark its current performance in various do- mains. The research we present here provides a preliminary benchmark on ChatGPT’s ability to emulate the style and information presented in financial statement note disclo- sures. Using text from Canada’s major banks (n = 5) over the period of 2019–2021, we query ChatGPT to generate two required note disclosures and compare its text against the note disclosures written by the banks in their corporate annual reports. We find that the similarity between ChatGPT’s text and the human-authored text is very low, but also find that ChatGPT’s text is significantly more readable for one of the two disclosures (p < 0.05).","Dalhousie University","ChatGPT, Machine Learning, Financial Statements, Similarity, Stylometry, Readability","Finally, ChatGPT was trained on the common crawl web corpora which consists of 12 years of common crawl data [30 [T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. “Language models are few-shot learners”. In: Advances in neural information processing systems 33 (2020), pp. 1877–1901.]]. That means that for each of the 5 banks, there are only 12 annual reports that ChatGPT has seen. 
This could have a material effect on the outcome of its generation.","","","","" "Kyle Steinfeld – University of California, Berkeley, CA, USA","Clever little tricks: A socio-technical history of text-to-image generative models","https://doi.org/10.1177/14780771231168230","papers","20230101Z00:00:00","","The emergence of text-to-image generative models (e.g., Midjourney, DALL-E 2, Stable Diffusion) in the summer of 2022 impacted architectural visual culture suddenly, severely, and seemingly out of nowhere. To contextualize this phenomenon, this text offers a socio-technical history of text-to-image generative systems. Three moments in time, or “scenes,” are presented here: the first at the advent of AI in the middle of the last century; the second at the “reawakening” of a specific approach to machine learning at the turn of this century; the third that documents a rapid sequence of innovations, dubbed “clever little tricks,” that occurred across just 18 months. This final scene is the crux, and represents the first formal documentation of the recent history of a specific set of informal innovations. These innovations were produced by non-affiliated researchers and communities of creative contributors, and directly led to the technologies that so compellingly captured the architectural imagination in the summer of 2022. Across these scenes, we examine the technologies, application domains, infrastructures, social contexts, and practices that drive technical research and shape creative practice in this space.","University of California, Berkeley, CA, USA","ai/text-to-image-models, ai/generative-models, architecture, architectural visual culture","The LAION-400 dataset consists of 400 million image-caption pairs extracted from random selections of web pages from a web scrape that captured sites between 2014 and 2021 that was conducted by Common Crawl (a separate non- profit established in 2011 “with the goal of democratizing access to web information by producing and maintaining an open repository of web crawl data”).⁷⁵ [⁷⁵ Gil E. About common crawl. 2011. https://commoncrawl.org/about/ (accessed 04 December 2022).] Although it specifically was “not meant for any real-world production or application,”⁷⁶ [⁷⁶ Schuhmann C. LAION-400-Million open dataset. December 12, 2022. https://laion.ai/blog/laion-400-open-dataset (accessed 04 December 2022).] this dataset was used by Google to train its text-to-image generative model “Imagen” in 2022.⁷⁷ [⁷⁷ Saharia C, Chan W, Saxena S, et al. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. Epub ahead of print May 2022. DOI: 10.48550/arXiv.2205.11487]","","","","" "Zahra Moti, Asuman Senol, Hamid Bostani, Frederik Zuiderveen Borgesius, Veelasha Moonsamy, Arunesh Mathur, Gunes Acar – Radboud University, Netherlands; imec-COSIC, KU Leuven, Belgium; Ruhr University Bochum, Germany","Targeted and Troublesome: Tracking and Advertising on Children's Websites","https://arxiv.org/pdf/2308.04887.pdf","papers","20230101Z00:00:00","","On the modern web, trackers and advertisers frequently construct and monetize users' detailed behavioral profiles without consent. Despite various studies on web tracking mechanisms and advertisements, there has been no rigorous study focusing on websites targeted at children. To address this gap, we present a measurement of tracking and (targeted) advertising on websites directed at children. 
Motivated by lacking a comprehensive list of child-directed (i.e., targeted at children) websites, we first build a multilingual classifier based on web page titles and descriptions. Applying this classifier to over two million pages, we compile a list of two thousand child-directed websites. Crawling these sites from five vantage points, we measure the prevalence of trackers, fingerprinting scripts, and advertisements. Our crawler detects ads displayed on child-directed websites and determines if ad targeting is enabled by scraping ad disclosure pages whenever available. Our results show that around 90% of child-directed websites embed one or more trackers, and about 27% contain targeted advertisements--a practice that should require verifiable parental consent. Next, we identify improper ads on child-directed websites by developing an ML pipeline that processes both images and text extracted from ads. The pipeline allows us to run semantic similarity queries for arbitrary search terms, revealing ads that promote services related to dating, weight loss, and mental health; as well as ads for sex toys and flirting chat services. Some of these ads feature repulsive and sexually explicit imagery. In summary, our findings indicate a trend of non-compliance with privacy regulations and troubling ad safety practices among many advertisers and child-directed websites. To protect children and create a safer online environment, regulators and stakeholders must adopt and enforce more stringent measures.","Radboud University, Netherlands; imec-COSIC, KU Leuven, Belgium; Ruhr University Bochum, Germany","web-science/tracking, web-science/advertisements, computer-security/internet-security","Applying the classifier to the Common Crawl dataset [32], we compiled a list of 2K manually verified child-directed websites. [...] Our preliminary analysis of over 500K web pages from the most popular one million websites in the Common Crawl dataset [32] showed that more than 97% of the websites have a title, 63% of the websites include a description, and 24% contain a keywords meta tag. [...] Applying this method to the WAT metadata files from the June-July 2022 Common Crawl snapshot [32], we extracted the titles and descriptions, limiting ourselves to the top million websites in the Tranco [26] or the CrUX [82] list. [...] [32] “June/July 2022 crawl archive now available – Common Crawl,” https://commoncrawl.org/2022/07/june-july-2022-crawl-archive-now-available, 2023, [Online; accessed 28. Feb. 2023].","","","","" "Juhani Luotolahti, Jenna Kanerva, Jouni Luoma, Valtteri Skantsi, Sampo Pyysalo, Veronika Laippala, Filip Ginter – University of Turku, Finland; University of Oulu, Finland","Finnish Internet Parsebank","https://www.researchsquare.com/article/rs-3138153/v1","papers","20230101Z00:00:00","","We present a Finnish web corpus with multiple text sources and rich additional annotations. The corpus is based in large parts on a dedicated Internet crawl, supplementing data from the Common Crawl initiative and the Finnish Wikipedia. The size of the corpus is 6.2 billion tokens from 9.5 million source documents. The text is enriched with morphological analyses, word lemmas, dependency trees, named entities and text register (genre) identification. Paragraph-level scores of an n-gram language model, as well as paragraph duplication rate in each document are provided, allowing for further filtering of the dataset by the end user. 
Thanks to changes in the 2023 Finnish copyright legislation, the corpus is openly available for research purposes, and can also be accessed through the NoSketchEngine concordance tool and the dep search dependency tree query tool, all at https://turkunlp.org/finnish_nlp.html.","University of Turku, Finland; University of Oulu, Finland","nlp/corpus-construction, language-specific corpus, web-as-corpus, nlp/dependency-tree-bank, Finnish","3.1 Data sources ¶ Our corpus is based on three primary data sources: Finnish Wikipedia, Common Crawl, and a custom web-crawl. [...] The Common Crawl dataset includes both plain text and raw HTML files, at the time without language metadata. We employed a language detection step using CLD3 as the language detector and MapReduce to download only the Finnish-language plaintext from the Amazon cloud service that hosts Common Crawl. As shown in Table 2, this resulted in only a moderate amount of new data (3.2GB deduplicated text) on top of Wikipedia (1.5GB deduplicated text). ¶ Consequently, we conducted a dedicated web crawl using the SpiderLing webcrawler (Suchomel & Pomikálek, 2012). This web crawler is specifically designed for collecting monolingual plaintext web corpora. It comprises a web crawling engine, a trigram-based language detector, and a boilerplate remover called Justext, which is responsible for extracting plain text. Moreover, the crawler is lightweight and easy to run. The crawl was seeded with the list of all domain names in the .fi top-level domain, as well as the URLs of all Finnish text pages we gathered from CommonCrawl in the previous step. The crawl was carried out between 2014 and 2016. ¶ The final sizes of text obtained from the three sources are summarized in Table 2, which shows that the dedicated webcrawl constitutes by far the largest portion of the final corpus. Note that in the newer versions of Common Crawl, a considerably stronger emphasis is placed on multilingual coverage, and the benefit of a dedicated webcrawl might be smaller but very unlikely to vanish entirely.","","","","" "R. Tenis – ","Modelling an Efficient URL Phishing Detection Approach Based on a Dense Network Model. Computer Systems Science & Engineering. 2023, Vol. 47 Issue 2, p2625-2641. 17p.","https://web.s.ebscohost.com/abstract?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=02676192&AN=169779920&h=WGjAKpK7ACB1ZcUfp8Ikhm9IcDPjsbjptgyhA5ityW47Z2oYK4JmZTEMhj6t1UhLOFgbraBWyMgS1NID6mz%2bcA%3d%3d&crl=c&resultNs=AdminWebAuth&resultLocal=ErrCrlNotAuth&crlhashurl=login.aspx%3fdirect%3dtrue%26profile%3dehost%26scope%3dsite%26authtype%3dcrawler%26jrnl%3d02676192%26AN%3d169779920","papers","20230101Z00:00:00","","The social engineering cyber-attack is where culprits mislead the users by getting the login details which provides the information to the evil server called phishing. The deep learning approaches and the machine learning are compared in the proposed system for presenting the methodology that can detect phishing websites via Uniform Resource Locator (URLs) analysis. The legal class is composed of the home pages with no inclusion of login forms in most of the present modern solutions, which deals with the detection of phishing. Contrarily, the URLs in both classes from the login page due, considering the representation of a real case scenario and the demonstration for obtaining the rate of false-positive with the existing approaches during the legal login pages provides the test having URLs.
In addition, some model reduces the accuracy rather than training the base model and testing the latest URLs. In addition, a feature analysis is performed on the present phishing domains to identify various approaches to using the phishers in the campaign. A new dataset called the MUPD dataset is used for evaluation. Lastly, a prediction model, the Dense forward-backwards Long Short Term Memory (LSTM) model (d-FBLSTM), is presented for combining the forward and backward propagation of LSTM to obtain the accuracy of 98.5% on the initiated login URL dataset.","","computer-security/internet-security, web-security","The PhishTank provides the URLs for phishing to be gathered, and the Common Crawl provides the legal URLs.","","","","" "Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari Morcos – FAIR, Meta AI","D4: Improving LLM Pretraining via Document De-Duplication and Diversification","https://dmlr.ai/assets/accepted-papers/131/CameraReady/LLM_Data_Pruning_Paper_Camera_Ready.pdf","papers","20230101Z00:00:00","","","FAIR, Meta AI","nlp/large-language-models, nlp/corpus-construction, deduplication","We perform all of our training runs on a version of CommonCrawl pre-processed with a CCNet (Wenzek et al., 2019) pipeline identical to the one used by Touvron et al. (2023). We add an additional step of MinHash-based de-duplication (see more details in Section A.1). Applying this common step before our experiments guarantees that any effects observed in our experiments complement the currently prevalent approach of MinHash-based data de-duplication strategies. Throughout the rest of this work, we refer to this dataset as CC-dedup. [...] A.1.2. DATASET CURATION DETAILS In this subsection, we describe how we curate CC-dedup, the starting source dataset used throughout the paper. We start with 5 CommonCrawl dumps³ [³https://commoncrawl.org/the-data/get-started/] which range from 2017 to 2020. We then use CC-net (Wenzek et al., 2019), to de-duplicate data at the paragraph level, remove non-English web pages, and filter out low-quality pages. The pipeline we use is identical to the pipeline used in Touvron et al. (2023) (see the section after the subtitle “English CommonCrawl [67%]”, within Section 2). On top of this, we add an additional step of MinHash (Broder, 1997) de-duplication at the document-level. The parameters for MinHash are 20 hashes per signature, 20 buckets, and 1 row per bucket. These parameters are the default parameters in the spark implementation of MinHashLSH, and we did not do a hyperparameter sweep on these parameters due to compute limitations. Previous work has attempted running MinHash with much more aggressive parameters: Lee et al. (2021) and Penedo et al. use 20 buckets, 450 hashes per bucket, and 9000 signatures per hash. We conjecture that more aggressive MinHash would remove more templates, resulting in a higher-quality starting dataset, potentially making the SemDeDup step of D4 less necessary. Abbas et al. (2023) did find that the performance of MinHash from Lee et al. (2021) and SemDeDup are comparable at a fixed data selection ratio of 3.9% on C4, indicating that SemDeDup filters out similar data to aggressive MinHash does. We leave sweeping over these hyperparameters as future work. We note that since our dataset is curated from CommonCrawl dumps, there is risk that our training set contains offensive or PII content. We note, however, that this risk is no more than that of standard language modeling curation such as Touvron et al.
(2023), since we use the same pipeline to filter CommonCrawl dumps.","","","","" "Liang Wang, Hyojoon Kim, Prateek Mittal, Jennifer Rexford – Princeton University, USA","RAVEN: Stateless Rapid IP Address Variation for Enterprise Networks.","https://petsymposium.org/2023/files/papers/issue3/popets-2023-0077.pdf","papers","20230101Z00:00:00","privacy, traffic analysis, programmable data plane, P4, QUIC","Enterprise networks face increasing threats against the privacy of their clients. Existing enterprise services like Network Address Translation (NAT) offer limited privacy protection, at the cost of requiring per-flow state. In this paper, we introduce RAVEN (Rapid Address Variation for Enterprise Networks), a network-based privacy solution that is complementary to application-layer defenses. RAVEN protects privacy by frequently changing the client’s public IP address. With RAVEN, a client is not limited to using a single IP address at a given time, or even for a given connection. RAVEN goes further, breaking the association between packets that belong to the same connection by frequently changing the client’s IP address within a single connection. RAVEN achieves this through a novel division of labor: the client uses a transport protocol, like QUIC, that supports seamless connection migration, and decides when to switch its IP address, while the enterprise network actually changes the client’s IP address in a stateless manner at line rate and ensures end-to-end packet delivery. We implement RAVEN using QUIC and off-the-shelf programmable switches. We deploy RAVEN in a test IPv6 network and evaluate its defense against webpage fingerprinting attacks. Even with a strong adversary, the average precision of the best adaptive attacks drops from 0.96 to 0.84, with a 0.5% degradation in client throughput. When RAVEN changes IP addresses at unpredictable frequency, the precision of the best attacks falls to 0.78—the same effectiveness as WTF-PAD.","Princeton University, USA","computer-security/internet-security, privacy, internet traffic analysis","Webpages to fingerprint. To find webpages on GitHub Pages, we search the Common Crawl database [59] (Jan 2022 release) to extract URLs whose domain names end with “*.github.io”. From about 0.8 M URLs, we sampled 100 URLs as monitored webpages and 10,000 URLs as unmonitored. [...] [⁵⁹] The Common Crawl team. 2022. The Common Crawl Dataset. https://commoncrawl.org/.","CC-MAIN-2022-05","","","" "Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust – Google DeepMind; The University of Tokyo, Japan","A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis","https://arxiv.org/pdf/2307.12856.pdf","papers","20230101Z00:00:00","","Pre-trained large language models (LLMs) have recently achieved better generalization and sample efficiency in autonomous web navigation. However, the performance on real-world websites has still suffered from (1) open domainness, (2) limited context length, and (3) lack of inductive bias on HTML. We introduce WebAgent, an LLM-driven agent that can complete the tasks on real websites following natural language instructions. WebAgent plans ahead by decomposing instructions into canonical sub-instructions, summarizes long HTML documents into task-relevant snippets, and acts on websites via generated Python programs from those. 
We design WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new pre-trained LLMs for long HTML documents using local and global attention mechanisms and a mixture of long-span denoising objectives, for planning and summarization. We empirically demonstrate that our recipe improves the success on a real website by over 50%, and that HTML-T5 is the best model to solve HTML-based tasks; achieving 14.9% higher success rate than prior SoTA on the MiniWoB web navigation benchmark and better accuracy on offline task planning evaluation.","Google DeepMind; The University of Tokyo, Japan","nlp/language-models, web-agent, autonomous web navigation, autonomous web browsing","For the dataset, we prepare 100 WARC files (April 2019) from CommonCrawl, and pre-process the raw HTML by removing non-Unicode and alphanumeric documents and extracting subtrees around <label> elements that have for attribute, to reduce the noise in training corpus, which results in about 3.41M examples (Table 1).","","","","" "Hynek Kydlíček, Jindřich Libovický – Univerzita Karlova, Czech Republic","A Dataset and Strong Baselines for Classification of Czech News Texts","https://arxiv.org/pdf/2307.10666.pdf","papers","20230101Z00:00:00","News classification, NLP in Czech, News Dataset","Pre-trained models for Czech Natural Language Processing are often evaluated on purely linguistic tasks (POS tagging, parsing, NER) and relatively simple classification tasks such as sentiment classification or article classification from a single news source. As an alternative, we present CZEch NEws Classification dataset (CZE-NEC), one of the largest Czech classification datasets, composed of news articles from various sources spanning over twenty years, which allows a more rigorous evaluation of such models. We define four classification tasks: news source, news category, inferred author's gender, and day of the week. To verify the task difficulty, we conducted a human evaluation, which revealed that human performance lags behind strong machine-learning baselines built upon pre-trained transformer models. Furthermore, we show that language-specific pre-trained encoder analysis outperforms selected commercially available large-scale generative language models.","Univerzita Karlova, Czech Republic","nlp/corpus-construction, nlp/text-classification, ir/information-extraction, news-classification","We create the CZE-NEC by crawling Czech news websites from CommonCrawl (§ 2.1) and use the available metadata to define classification tasks (§ 2.3). [...] We have collected the news stories text from the following six Czech online news providers: SeznamZprávy.cz, iRozhlas.cz, Novinky.cz, Deník.cz, iDnes.cz, and Aktuálně.cz. Instead of crawling the pages directly, we used the CommonCrawl archive to extract the articles.","","","","" "Hynek Kydlíček – Univerzita Karlova, Czech Republic","Implicit information extraction from news stories","http://hdl.handle.net/20.500.11956/183054","papers","20230101Z00:00:00","","","Univerzita Karlova, Czech Republic","nlp/corpus-construction, nlp/text-classification, ir/information-extraction, news-classification","We used Common Crawl² [²https://commoncrawl.org/] as a data source, as crawling live websites would be infeasible. For extraction, we developed a custom tool C’monCrawl³ [³https://github.com/hynky1999/CmonCrawl], which allows end-to-end extraction of Common Crawl data.
We then deployed it in distributed setting on Artificial Intelligence Cluster (AIC)⁴ [⁴https://aic.ufal.mff.cuni.cz/], processed 49.2M URLs and extracted 3.2M articles.","","","","" "Matyas Bohacek, Michal Bravansky, Filip Trhlík, Václav Moravec – Faculty of Social Sciences, Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom","Czech-ing the News: Article Trustworthiness Dataset for Czech","https://aclanthology.org/2023.wassa-1.10/","papers","20230101Z00:00:00","","We present the Verifee dataset: a multimodal dataset of news articles with fine-grained trustworthiness annotations. We bring a diverse set of researchers from social, media, and computer sciences aboard to study this interdisciplinary problem holistically and develop a detailed methodology that assesses the texts through the lens of editorial transparency, journalist conventions, and objective reporting while penalizing manipulative techniques. We collect over 10,000 annotated articles from 60 Czech online news sources. Each item is categorized into one of the 4 proposed classes on the credibility spectrum – ranging from entirely trustworthy articles to deceptive ones – and annotated of its manipulative attributes. We fine-tune prominent sequence-to-sequence language models for the trustworthiness classification task on our dataset and report the best F-1 score of 0.53. We open-source the dataset, annotation methodology, and annotators' instructions in full length at https://www.verifee.ai/research/ to enable easy build-up work.","Faculty of Social Sciences, Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom","nlp/corpus-construction, nlp/fake-news-detection, news-classification","Initially, we assembled nearly 94,000 articles by scraping URLs of 60 Czech news sources² obtained from Common Crawl³. These sources included mainstream journalistic websites, tabloids, independent news outlets, and websites that are part of the disinformation ecosystem (Štětka et al., 2021), capturing the full scope of journalistic content in the Czech Republic. [...] We applied multiple filters and balancing mechanisms based on text length and topics to mitigate deficiencies caused by inherent flaws in Common Crawl, which reduced the dataset’s size from 94,000 to 10,197 items. This way, we also ensured that the data is as representative of the Czech news ecosystem and as diverse as possible.","","","","" "Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jingyuan Wang, Jian-Yun Nie, Ji-Rong Wen – Gaoling School of Artificial Intelligence, Renmin University of China, China; School of Information, Renmin University of China, China; DIRO, Université de Montréal, Canada; School of Computer Science and Engineering, Beihang University, China; Beijing Key Laboratory of Big Data Management and Analysis Methods, China","The Web Can Be Your Oyster for Improving Language Models","https://aclanthology.org/2023.findings-acl.46.pdf","papers","20230101Z00:00:00","","Pretrained language models (PLMs) encode a large amount of world knowledge. However, as such knowledge is frozen at the time of model training, the models become static and limited by the training data at that time. In order to further improve the capacity of PLMs for knowledge-intensive tasks, we consider augmenting PLMs with the large-scale web using search engine.
Unlike previous augmentation sources (e.g., Wikipedia data dump), the web provides broader, more comprehensive and constantly updated information. In this paper, we present a web-augmented PLM – UniWeb, which is trained over 16 knowledge-intensive tasks in a unified text-to-text format. Instead of simply using the retrieved contents from web, our approach has made two major improvements. Firstly, we propose an adaptive search engine assisted learning method that can self-evaluate the confidence level of PLM’s predictions, and adaptively determine when to refer to the web for more data, which can avoid useless or noisy augmentation from web. Secondly, we design a pretraining task, i.e., continual knowledge learning, based on salient spans prediction, to reduce the discrepancy between the encoded and retrieved knowledge. Experiments on a wide range of knowledge-intensive tasks show that our model significantly outperforms previous retrieval-augmented methods.","Gaoling School of Artificial Intelligence, Renmin University of China, China; School of Information, Renmin University of China, China; DIRO, Université de Montréal, Canada; School of Computer Science and Engineering, Beihang University, China; Beijing Key Laboratory of Big Data Management and Analysis Methods, China","nlp/large-language-models,","[...] we select the CCNet snapshot corresponding to the August 2019 Common Crawl snapshot which covers a wide range of 134M web documents and finally yields 906M passages of 100 tokens. CCNet processes Common Crawl through deduplication, language identification and quality filtering based on perplexity calculated by a language model.","","","CCNet","" "Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, Thien Huu Nguyen – University of Oregon, USA; Adobe Research, USA","CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages","https://arxiv.org/pdf/2309.09400.pdf","papers","20230101Z00:00:00","","The driving factors behind the development of large language models (LLMs) with impressive learning capabilities are their colossal model sizes and extensive training datasets. Along with the progress in natural language processing, LLMs have been frequently made accessible to the public to foster deeper investigation and applications. However, when it comes to training datasets for these LLMs, especially the recent state-of-the-art models, they are often not fully disclosed. Creating training data for high-performing LLMs involves extensive cleaning and deduplication to ensure the necessary level of quality. The lack of transparency for training data has thus hampered research on attributing and addressing hallucination and bias issues in LLMs, hindering replication efforts and further advancements in the community. These challenges become even more pronounced in multilingual learning scenarios, where the available multilingual text datasets are often inadequately collected and cleaned. Consequently, there is a lack of open-source and readily usable dataset to effectively train LLMs in multiple languages. To overcome this issue, we present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for LLM development.
Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs: [https://huggingface.co/datasets/uonlp/CulturaX]","University of Oregon, USA; Adobe Research, USA","nlp/corpus-construction, dataset-creation, nlp/large-language-models","","","CulturaX","Tensorflow-C4-Multilingual, OSCAR","" "Sneha Kudugunta, Isaac Caswell, Biao Zhang, Xavier Garcia, Christopher A. Choquette-Choo, Katherine Lee, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat – Google DeepMind; Google Research","MADLAD-400: A Multilingual And Document-Level Large Audited Dataset","https://arxiv.org/pdf/2309.04662.pdf","papers","20230101Z00:00:00","","We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models¹ available to the research community.","Google DeepMind; Google Research","nlp/corpus-construction, dataset-creation, nlp/large-language-models","A common approach to creating such datasets is to mine language specific data from general web crawls such as CommonCrawl [57, 43, 68] to create datasets. We simply take this approach and scale it. We train a document-level LangID model on 498 languages to obtain CommonCrawl annotations at a document level and obtain a 5-trillion token, document-level monolingual dataset. [...] First, we collect as large a dataset of unlabeled web text as possible. More specifically, we use all available snapshots of CommonCrawl² as of August 20, 2022. After some preliminary data cleaning, we use a highly multilingual LangID model to provide document-level annotations (Section 2.2). Finally, we conduct a self-audit (Section 2.4), or quality review, of this preliminary dataset partitioned by language, and design filters to remove noisy content. When appropriate, we correct language names and remove languages from the preliminary dataset. We note that building MADLAD-400 was an iterative process, and that while we describe one major quality review in depth, we conducted several stages of filtering.","","MADLAD-400","","" "Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, Jimmy Ba – University of Toronto, Canada; University of Cambridge, United Kingdom; Princeton University, USA","OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text","https://arxiv.org/abs/2310.06786","papers","20230101Z00:00:00","","There is growing evidence that pretraining on high quality, carefully thought-out tokens such as code or mathematics plays an important role in improving the reasoning abilities of large language models.
For example, Minerva, a PaLM model finetuned on billions of tokens of mathematical documents from arXiv and the web, reported dramatically improved performance on problems that require quantitative reasoning. However, because all known open source web datasets employ preprocessing that does not faithfully preserve mathematical notation, the benefits of large scale training on quantitative web documents are unavailable to the research community. We introduce OpenWebMath, an open dataset inspired by these works containing 14.7B tokens of mathematical webpages from Common Crawl. We describe in detail our method for extracting text and LaTeX content and removing boilerplate from HTML documents, as well as our methods for quality filtering and deduplication. Additionally, we run small-scale experiments by training 1.4B parameter language models on OpenWebMath, showing that models trained on 14.7B tokens of our dataset surpass the performance of models trained on over 20x the amount of general language data. We hope that our dataset, openly released on the Hugging Face Hub, will help spur advances in the reasoning abilities of large language models.","University of Toronto, Canada; University of Cambridge, United Kingdom; Princeton University, USA","mathematics, mathematical text, nlp/corpus-construction, dataset-creation, nlp/large-language-models","We extract documents from Common Crawl¹ [¹ https://commoncrawl.org/], applying our pipeline to extract text while preserving mathematical content in the form of LaTeX equations. We then filter the documents, ensuring that only high-quality English mathematical documents are kept. Finally, we deduplicate the dataset, resulting in 14.7B tokens of high-quality mathematical content suitable for both pretraining and finetuning large language models.","","OpenWebMath","","" "Minh-Hoang Dang, Alban Gaignard, Hala Skaf-Molli, Pascal Molli – Nantes Université, France","Schema.org: How is it used?","https://hal.science/hal-04250523/document","papers","20230101Z00:00:00","","Schema.org defines a shared vocabulary for semantically annotating web pages. Due to the vast and diverse nature of the contributed annotations, it is not easy to understand the widespread use of Schema.org. In this poster, we rely on the characteristic sets computed from the web data commons datasets to provide insights into property combinations on various websites. Thanks to in-depth experiments, this poster establishes a comprehensive observatory for schema.org annotations, visually presenting the most frequently used classes, commonly used combinations of properties per class, the average number of filled properties per class, and the classes with the greatest property coverage. These findings are valuable for both the communities involved in defining Schema.org vocabularies and the users of these vocabularies.","Nantes Université, France","semantic web, linked data","The Web Data Commons [3, 4] project extracts semantic annotations from the Common Crawl annually since 2010². It provides a reference dataset to study the evolution and adoption of semantic annotations in web pages. The extracted data is represented with RDF quads, which consist of RDF statements along with the URL of the corresponding web page. The abundance of annotations on the web and the diversity of contributors raise challenges in understanding how Schema.org is used at the web-scale. [...] We used the JSON-LD (most common formats) dataset from the WebDataCommons [3] released in October 2021.
This dataset is derived from crawling 35 million websites, of which 42% utilized Web Entities. It comprises 82 billion RDF quads (16 terabytes uncompressed) and 6.7 billion Schema.org entities.","","","WDC-triples","" "Qi Yan, Raihan Seraj, Jiawei He, Lili Meng, Tristan Sylvain – University of British Columbia, Canada; McGill University, Canada; Borealis AI","AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval","https://arxiv.org/pdf/2310.01880.pdf","papers","20230101Z00:00:00","","Machine-based prediction of real-world events is garnering attention due to its potential for informed decision-making. Whereas traditional forecasting predominantly hinges on structured data like time-series, recent breakthroughs in language models enable predictions using unstructured text. In particular, (Zou et al., 2022) unveils AutoCast, a new benchmark that employs news articles for answering forecasting queries. Nevertheless, existing methods still trail behind human performance. The cornerstone of accurate forecasting, we argue, lies in identifying a concise, yet rich subset of news snippets from a vast corpus. With this motivation, we introduce AutoCast++, a zero-shot ranking-based context retrieval system, tailored to sift through expansive news document collections for event forecasting. Our approach first re-ranks articles based on zero-shot question-passage relevance, honing in on semantically pertinent news. Following this, the chosen articles are subjected to zero-shot summarization to attain succinct context. Leveraging a pre-trained language model, we conduct both the relevance evaluation and article summarization without needing domain-specific training. Notably, recent articles can sometimes be at odds with preceding ones due to new facts or unanticipated incidents, leading to fluctuating temporal dynamics. To tackle this, our re-ranking mechanism gives preference to more recent articles, and we further regularize the multi-passage representation learning to align with human forecaster responses made on different dates. Empirical results underscore marked improvements across multiple metrics, improving the performance for multiple-choice questions (MCQ) by 48% and true/false (TF) questions by up to 8%.","University of British Columbia, Canada; McGill University, Canada; Borealis AI","information retrieval, event detection, nlp/large-language-models, news","We incorporate news articles from the Common Crawl corpus¹ [¹Common Crawl - Open Repository of Web Crawl Data, https://commoncrawl.org/] spanning 2016 to 2022 for retrieval purpose.","","","","" "Gus Eggert, Kevin Huo, Mike Biven, Justin Waugh – Approximate Labs, Boulder, USA","TabLib: A Dataset of 627M Tables with Context","https://arxiv.org/pdf/2310.07875.pdf","papers","20230101Z00:00:00","","It is well-established that large, diverse datasets play a pivotal role in the performance of modern AI systems for text and image modalities. However, there are no datasets for tabular data of comparable size and diversity to those available for text and images. Thus we present “TabLib”, a compilation of 627 million tables totaling 69 TiB, along with 867B tokens of context. TabLib was extracted from numerous file formats, including CSV, HTML, SQLite, PDF, Excel, and others, sourced from GitHub and Common Crawl.
The size and diversity of TabLib offer considerable promise in the table modality, reminiscent of the original promise of foundational datasets for text and images, such as The Pile and LAION.","Approximate Labs, Boulder, USA","dataset creation, web tables","We used the latest crawl at the time, which was CC-MAIN-2023-23. Common Crawl results are serialized using the WARC format, which includes “request” and “response” records. We only considered response records. We discarded “truncated” responses which had response lengths that exceed Common Crawl’s limit. If a WARC-Identified-Payload-Type record header was included in the record, then we used its mimetype as a hint for detecting the content type, otherwise we used the Content-Type header in the HTTP response, and followed a similar approach as GitHub (use the mimetype if possible, otherwise use libmagic). About 20% of WARC files were dropped due to issues parsing certain HTML elements with Pandas.","","","","" "Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou – Alibaba Group","Data-Juicer: A One-Stop Data Processing System for Large Language Models","https://arxiv.org/pdf/2309.02033.pdf","papers","20230101Z00:00:00","","The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations.
Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.","Alibaba Group","dataset creation, nlp/corpus-construction, nlp/large-language-models","","","","","" "Wang Tongjing, Evert Meijers, Ziyu Bao, Huijuan Wang – Utrecht University, The Netherlands; Delft University of Technology, The Netherlands","Intercity networks and urban performance: a geographical text mining approach","https://www.tandfonline.com/doi/pdf/10.1080/12265934.2023.2253193","papers","20230101Z00:00:00","","Compared to the burgeoning literature discussing the importance of agglomeration externalities for development, limited attention has been given to network externalities. This is largely due to limited data availability. We propose a general measure to proxy city network externalities based on toponym co-occurrences that indicate the relatedness between cities. This paper extracts intercity relationships based on the co-occurrence of Chinese place names on 2.5 billion webpages. We calculate and map absolute and relative network positions, which we use to explain urban labour productivity. We found that a stronger embeddedness in networks of cities is significantly and positively associated with urban productivity. Smaller cities benefit comparatively more from being well embedded in city networks, suggesting that these relations can compensate for a lack of agglomeration externalities. We also compare the importance for urban performance of city network externalities vis-à-vis agglomeration externalities. City network externalities turn out to be more important in explaining urban performance than agglomeration externalities. This calls for new theorizing on a relational approach to urban and regional development. Rather than stimulating further concentration of urbanization, our findings suggest that fostering relationships between cities is a viable alternative urban development strategy. We conclude with suggestions for a research agenda that delves deeper into city network externalities.","Utrecht University, The Netherlands; Delft University of Technology, The Netherlands","geography, web-mining, dataset creation","[...] like Meijers and Peris we prefer using corpora from the CommonCrawl Archive of webpages as our text corpus. We used the entire April 2019 database for processing and conducting experiments. The original database we extracted contains about 6.98 TB of uncompressed text containing 2.5 billion web pages crawled between 18 and 26 April 2019. We selected all pages using at least 10 Chinese characters. The filtered corpus contains approximately 110 billion Chinese words on 91 million pages from 1067 different domains. Over 91% of the tokens are from websites registered under the four top-level domains (TLD): .com (62.23%), .cn (14.80%), .net (7.86%) and .org (2.68%). The four TLDs make up about 87.57% of pages.","","","","" "Mikko Aulamo, Nikolay Bogoychev, Shaoxiong Ji, Graeme Nail, Gema Ramírez-Sánchez, Jörg Tiedemann, Jelmer Van Der Linde, Jaume Zaragoza – University of Helsinki, ; University of Edinburgh, United Kingdom; Prompsit Language Engineering","HPLT: High Performance Language Technologies","https://aclanthology.org/2023.eamt-1.61.pdf","papers","20230101Z00:00:00","","We describe the High Performance Language Technologies project (HPLT), a 3-year EU-funded project started in September 2022. HPLT will build a space combining petabytes of natural language data with large-scale model training. 
It will derive monolingual and bilingual datasets from the Internet Archive and CommonCrawl and build efficient and solid machine translation (MT) as well as large language models (LLMs). HPLT aims at providing free, sustainable and reusable datasets, models and workflows at scale using high-performance computing (HPC).","University of Helsinki, ; University of Edinburgh, United Kingdom; Prompsit Language Engineering","nlp/corpus-construction, nlp/large-language-models","Datasets: Starting from 7 PB of web-crawled data from the Internet Archive³ and 5 from CommonCrawl,⁴ we will derive monolingual and bilingual datasets for systematic LLM and MT building with a large language coverage.","","","","" "Raffaele Sommese, Roland van Rijswijk-Deij, Mattijs Jonker – University of Twente, The Netherlands","This Is a Local Domain: On Amassing Country-Code Top-Level Domains from Public Data","https://arxiv.org/pdf/2309.01441.pdf","papers","20230101Z00:00:00","","Domain lists are a key ingredient for representative censuses of the Web. Unfortunately, such censuses typically lack a view on domains under country-code top-level domains (ccTLDs). This introduces unwanted bias: many countries have a rich local Web that remains hidden if their ccTLDs are not considered. The reason ccTLDs are rarely considered is that gaining access -- if possible at all -- is often laborious. To tackle this, we ask: what can we learn about ccTLDs from public sources? We extract domain names under ccTLDs from 6 years of public data from Certificate Transparency logs and Common Crawl. We compare this against ground truth for 19 ccTLDs for which we have the full DNS zone. We find that public data covers 43%-80% of these ccTLDs, and that coverage grows over time. By also comparing port scan data we then show that these public sources reveal a significant part of the Web presence under a ccTLD. We conclude that in the absence of full access to ccTLDs, domain names learned from public sources can be a good proxy when performing Web censuses.","University of Twente, The Netherlands","dataset creation, internet domain names, ccTLDs, country-code top-level domains","Common Crawl – Common Crawl is a nonprofit organization that builds and maintains a sizable, open repository of Web crawl data, offering years and petabytes of Web page data. The Common Crawl data lives in Amazon S3 as part of Amazon’s Open Data Sponsorship Program and is free for anyone to access. Crawls are seeded from a set of candidate domain names and the crawler follows links leading to other pages. Crawls are performed approximately every one to two months and contain raw Web page data, metadata and text extractions, among others. Relevant to our work, crawls accumulate many tens of millions of registered domain names that one can extract from the so-called URL index. [...] For Common Crawl we consider data for crawl snapshots dated between June 2017 and June 2023 (inclusive). There are 58 such snapshots, collectively accounting for 127 million registered domain names.
The combined total number of unique registered domain names in our consolidated dataset is 430 million.","","","","" "Isaac Caswell, Lisa Wang, Isabel Papadimitriou – Google Research; Google DeepMind; Computer Science Department, Stanford University","Separating the Wheat from the Chaff with BREAD: An open-source benchmark and metrics to detect redundancy in text","https://arxiv.org/abs/2311.06440","papers","20230101Z00:00:00","","Data quality is a problem that perpetually resurfaces throughout the field of NLP, regardless of task, domain, or architecture, and remains especially severe for lower-resource languages. A typical and insidious issue, affecting both training data and model output, is data that is repetitive and dominated by linguistically uninteresting boilerplate, such as price catalogs or computer-generated log files. Though this problem permeates many web-scraped corpora, there has yet to be a benchmark to test against, or a systematic study to find simple metrics that generalize across languages and agree with human judgements of data quality. In the present work, we create and release BREAD, a human-labeled benchmark on repetitive boilerplate vs. plausible linguistic content, spanning 360 languages. We release several baseline CRED (Character REDundancy) scores along with it, and evaluate their effectiveness on BREAD. We hope that the community will use this resource to develop better filtering methods, and that our reference implementations of CRED scores can become standard corpus evaluation tools, driving the development of cleaner language modeling corpora, especially in low-resource languages.","Google Research; Google DeepMind; Computer Science Department, Stanford University","nlp/corpus-construction, data quality, nlp/boilerplate-removal, redundancy","BREAD consists of randomly-chosen documents from the multilingual, common-crawl-based MADLAD-400 dataset (Kudugunta et al., 2023), which are then annotated by expert NLP-practitioner annotators.","","","MADLAD-400","" "Sneha Kudugunta, Isaac Rayburn Caswell, Biao Zhang, Xavier Garcia, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat – Google DeepMind; Google Research","MADLAD-400: A Multilingual And Document-Level Large Audited Dataset","https://openreview.net/forum?id=Y45ZCxslFx","papers","20230101Z00:00:00","","We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.","Google DeepMind; Google Research","nlp/corpus-construction, nlp/multi-lingual-corpus","A common approach to creating such datasets is to mine language specific data from general web crawls such as CommonCrawl [52, 38, 63] to create datasets. We simply take this approach and scale it. We train a document-level LangID model on 498 languages to obtain CommonCrawl annotations at a document level and obtain a 5-trillion token, document-level monolingual dataset.¶ [...]
First, we collect as large a dataset of unlabeled web text as possible. More specifically, we use all available snapshots of CommonCrawl³ as of August 20, 2022. After some preliminary data cleaning, we use a highly multilingual LangID model to provide document-level annotations (Section 2.2). Finally, we conduct a self-audit (Section 2.4), or quality review, of this preliminary dataset partitioned by language, and design filters to remove noisy content. When appropriate, we correct language names and remove languages from the preliminary dataset. We note that building MADLAD-400 was an iterative process, and that while we describe one major quality review in depth, we conducted several stages of filtering.","","MADLAD-400","","" "Xian Gong, Paul X. Mccarthy, Marian-Andrei Rizoiu, Paolo Boldi – University of Technology, Australia; University of New South Wales, Australia; Università degli Studi di Milano, Italy","Harmony in the Australian Domain Space","https://doi.org/10.1145/3614419.3643998","papers","20240101Z00:00:00","","In this paper we use for the first time a systematic approach in the study of harmonic centrality at a Web domain level, and gather a number of significant new findings about the Australian web. In particular, we explore the relationship between economic diversity at the firm level and the structure of the Web within the Australian domain space, using harmonic centrality as the main structural feature. The distribution of harmonic centrality values is analyzed over time, and we find that the distributions exhibit a consistent pattern across the different years. The observed distribution is well captured by a partition of the domain space into six clusters; the temporal movement of domain names across these six positions yields insights into the Australian Domain Space and exhibits correlations with other non-structural characteristics. From a more global perspective, we find a significant correlation between the median harmonic centrality of all domains in each OECD country and one measure of global trust, the WJP Rule of Law Index. Further investigation demonstrates that 35 countries in OECD share similar harmonic centrality distributions. The observed homogeneity in distribution presents a compelling avenue for exploration, potentially unveiling critical corporate, regional, or national insights.","University of Technology, Australia; University of New South Wales, Australia; Università degli Studi di Milano, Italy","","There are many public collections of web crawls, but one that is known for being very reliable and quite wide in scope is the Common Crawl¹. Common Crawl’s measurements are preferred for web and network analysis due to their extensive coverage, regular updates, and large-scale, publicly accessible datasets, which reduces the need for resource-intensive data collection and is applicable across various research in a reproducible way. [...]","","","","" "Peter Carragher, Evan M. Williams, Kathleen M. Carley – Carnegie Mellon University, USA","Misinformation Resilient Search Rankings with Webgraph-based Interventions","https://doi.org/10.1145/3670410","papers","20240101Z00:00:00","search engine optimization, misinformation, website reliability, pagerank","The proliferation of unreliable news domains on the internet has had wide-reaching negative impacts on society. We introduce and evaluate interventions aimed at reducing traffic to unreliable news domains from search engines while maintaining traffic to reliable domains.
We build these interventions on the principles of fairness (penalize sites for what is in their control), generality (label/fact-check agnostic), targeted (increase the cost of adversarial behavior), and scalability (works at webscale). We refine our methods on small-scale webdata as a testbed and then generalize the interventions to a large-scale webgraph containing 93.9M domains and 1.6B edges. We demonstrate that our methods penalize unreliable domains far more than reliable domains in both settings and we explore multiple avenues to mitigate unintended effects on both the small-scale and large-scale webgraph experiments. These results indicate the potential of our approach to reduce the spread of misinformation and foster a more reliable online information ecosystem. This research contributes to the development of targeted strategies to enhance the trustworthiness and quality of search engine results, ultimately benefiting users and the broader digital community.","Carnegie Mellon University, USA","web-science/hyperlinkgraph, misinformation, disinformation, domain-ranking","","","","","" "Tommaso Fontana, Sebastiano Vigna, Stefano Zacchiroli – Inria, DGDI, Paris, France; Università degli Studi di Milano, Dipartimento di Informatica, Milan, Italy; LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France","WebGraph: The Next Generation (Is in Rust)","https://doi.org/10.1145/3589335.3651581","papers","20240101Z00:00:00","","","Inria, DGDI, Paris, France; Università degli Studi di Milano, Dipartimento di Informatica, Milan, Italy; LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France","web-science/hyperlinkgraph; graph-processing; programming-languages/Java; programming-languages/Rust; cc-cited-not-used","Moreover, open data projects such as Common Crawl and Software Heritage (SWH) [5] have used WebGraph to compress and distribute their data.","","","","" "Henry S Thompson – The University of Edinburgh, Edinburgh, United Kingdom","Improved methodology for longitudinal Web analytics using Common Crawl","https://www.research.ed.ac.uk/en/publications/improved-methodology-for-longitudinal-web-analytics-using-common-","papers","20240101Z00:00:00","","Common Crawl is a multi-petabyte longitudinal dataset containing over 100 billion web pages which is widely used as a source of language data for sequence model training and in web science research. Each of its constituent archives is on the order of 75TB in size. Using it for research, particularly longitudinal studies, which necessarily involve multiple archives, is therefore very expensive in terms of compute time and storage space and/or web bandwidth. Two new methods for mitigating this problem are presented here, based on exploiting and extending the much smaller (<200 gigabytes (GB) compressed) index which is available for each archive. By adding Last-Modified timestamps to the index we enable longitudinal exploration using only a single archive. By comparing the distribution of index features for each of the 100 segments into which each archive is divided with their distribution over the whole archive, we have identified the least and most representative segments for a number of recent archives. Using this allows the segment(s) that are most representative of an archive to be used as proxies for the whole.
We illustrate this approach in an analysis of changes in URI length over time, leading to an unanticipated insight into how the creation of Web pages has changed over time.","The University of Edinburgh, Edinburgh, United Kingdom","web-archiving, web-dataset","","","","",""
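Several of the entries above (for example, the RAVEN webpage-fingerprinting study, the ccTLD census, and the longitudinal-methodology paper by Thompson) work from the Common Crawl URL index rather than from the full WARC archives. The sketch below is not taken from any of those papers; it is a minimal, hedged illustration of that kind of index lookup, assuming the public CDX endpoint at index.commoncrawl.org, the Python requests library, and an arbitrarily chosen crawl label (CC-MAIN-2022-05) and domain pattern (github.io). The function names and the single-page limit are illustrative only.

```python
# Illustrative sketch (not from any catalogued paper): query the public
# Common Crawl CDX index for captures under *.github.io in one crawl.
import json
import requests

# Assumed endpoint: the index server for the CC-MAIN-2022-05 crawl.
CDX_ENDPOINT = "https://index.commoncrawl.org/CC-MAIN-2022-05-index"


def count_index_pages(url_pattern: str) -> int:
    """Ask the index server how many result pages exist for this query."""
    resp = requests.get(
        CDX_ENDPOINT,
        params={"url": url_pattern, "matchType": "domain", "showNumPages": "true"},
        timeout=60,
    )
    resp.raise_for_status()
    return int(json.loads(resp.text)["pages"])


def fetch_index_page(url_pattern: str, page: int) -> list[dict]:
    """Fetch one page of index records (one JSON object per line) for the query."""
    resp = requests.get(
        CDX_ENDPOINT,
        params={
            "url": url_pattern,    # e.g. "github.io"
            "matchType": "domain", # include all subdomains of the pattern
            "output": "json",      # newline-delimited JSON records
            "page": page,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return [json.loads(line) for line in resp.text.splitlines() if line]


if __name__ == "__main__":
    pattern = "github.io"
    total_pages = count_index_pages(pattern)
    # Only the first page is fetched here; a full extraction would loop over all pages.
    records = fetch_index_page(pattern, page=0)
    urls = {record["url"] for record in records}
    print(f"{total_pages} index pages available; first page holds {len(urls)} distinct URLs")
```

Paging through every index page (and, where needed, following the returned WARC filename, offset, and length fields into the archives on S3) is how such studies typically scale this lookup; that part is omitted here for brevity.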