diff --git "a/commoncrawl_annotated.csv" "b/commoncrawl_annotated.csv" --- "a/commoncrawl_annotated.csv" +++ "b/commoncrawl_annotated.csv" @@ -1,9 +1,52 @@ "cc_project_author","post_title","cc_project_url","cc_project_category","post_date","keywords","abstract","cc_author_affiliation","cc_class","cc_snippet","cc_dataset_used","cc_derived_dataset_about","cc_derived_dataset_used","cc_derived_dataset_cited" "Ahad Rana – Common Crawl","Common Crawl – Building an open web-scale crawl using Hadoop","https://www.slideshare.net/hadoopusergroup/common-crawlpresentation","papers","20100101Z00:00:00","","","Common Crawl","web-crawling, big data, Hadoop","","","","","" +"Hannes Mühleisen, Christian Bizer – Freie Universität, Berlin, Germany","Web Data Commons – Extracting Structured Data from Two Large Web Corpora","http://ceur-ws.org/Vol-937/ldow2012-inv-paper-2.pdf","papers","20120101Z00:00:00","","","Freie Universität, Berlin, Germany","","","","","","" +"Alexandra Birch, Nadir Durrani, Phillip Koehn – School of Informatics, University of Edinburgh","Edinburgh SLT and MT System Description for the IWSLT 2013","http://workshop2013.iwslt.org/downloads/Edinburgh_SLT_and_MT_System_Description_for_the_IWSLT_2013_Evaluation.pdf","papers","20130101Z00:00:00","","","School of Informatics, University of Edinburgh","","","","","","" +"Jason R. Smith, Herve Saint-Amand, Magdalena Plamada, Phillipp Koehn, Chris Callison-Burch, Adam Lopez – Johns Hopkins University, University of Edinburgh, University of Zurich, University of Pennsylvania","Dirt Cheap Web-Scale Parallel Text from the Common Crawl","http://www.cs.jhu.edu/~ccb/publications/bitexts-from-common-crawl.pdf","papers","20130101Z00:00:00","","","Johns Hopkins University, University of Edinburgh, University of Zurich, University of Pennsylvania","","","","","","" +"Sara Stymne, Christian Hardmeier, Jorg Tiedemann, Joakim Nivre – Uppsala University: Department of Linguistics and Philology","Tunable Distortion Limits and Corpus Cleaning for SMT","http://statmt.org/wmt13/pdf/WMT29.pdf","papers","20130101Z00:00:00","","","Uppsala University: Department of Linguistics and Philology","","","","","","" +"Thanh-Le Ha, Teresa Herrmann, Jan Niehues, Mohammed Mediani, Eunah Cho, Yuqi Zhang, Isabel Slawik, Alex Waibel – Institute for Anthropomatics","The KIT Translation Systems for IWSLT 2013","http://workshop2013.iwslt.org/downloads/The_KIT_Translation_Systems_for_IWSLT_2013.pdf","papers","20130101Z00:00:00","","","Institute for Anthropomatics","","","","","","" +"Wanno Drijfhout, Oliver Jundt, Lesley Wevers, Djoerd Hiemstra – University of Twente","Traitor: Associating Concepts using the World Wide Web","http://doc.utwente.nl/88328/","papers","20130101Z00:00:00","","","University of Twente","","","","","","" +"Christian Bizer, Kai Eckert, Robert Meusel, Hannes Mühleisen, Michael Schuhmacher, Johanna Völker – Data and Web Science Group – University of Mannhein, Database Architectures Group, Centrum Wiskunde & Informatica, Netherlands","Deployment of RDFa, Microdata, and Microformats on the Web – A Quantitative Analysis","http://hannes.muehleisen.org/Bizer-etal-DeploymentRDFaMicrodataMicroformats-ISWC-InUse-2013.pdf","papers","20130101Z00:00:00","","","Data and Web Science Group – University of Mannhein, Database Architectures Group, Centrum Wiskunde & Informatica, Netherlands","","","","","","" "Jeffrey Pennington, Richard Socher, Christopher D. 
Manning – Stanford University, California, USA","GloVe: Global vectors for word representation","https://aclanthology.org/D14-1162.pdf","papers","20140101Z00:00:00","","","Stanford University, California, USA","nlp/word-embeddings","We trained our model on five corpora of varying sizes: [...] and on 42 billion tokens of web data, from Common Crawl⁵ [⁵ To demonstrate the scalability of the model, we also trained it on a much larger sixth corpus, containing 840 billion tokens of web data, but in this case we did not lowercase the vocabulary, so the results are not directly comparable.].","","","","" +"Mohammed Mediani, Joshua Winebarger, Alexander Waibel – Karlsruhe Institute of Technology, Germany","Improving In-Domain Data Selection For Small In-Domain Sets","http://www.statmt.org/OSMOSES/IWSLT-36.pdf","papers","20140101Z00:00:00","","","Karlsruhe Institute of Technology, Germany","","","","","","" +"Junfei Guo, Juan Liu, Qi Han, Andreas Maletti – School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","A Tunable Language Model for Statistical Machine Translation","http://www.ims.uni-stuttgart.de/institut/mitarbeiter/maletti/pub/guoliuhanmal14.pdf","papers","20140101Z00:00:00","","","School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","","","","","","" +"Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Y. 
Ng – Baidu Research – Silicon Valley AI Lab","Deep Speech: Scaling up end-to-end speech recognition","http://arxiv.org/pdf/1412.5567v2.pdf","papers","20140101Z00:00:00","","","Baidu Research – Silicon Valley AI Lab","","","","","","" +"Eva Hasler, Philipp Koehn, Barry Haddow, Phil Blunsom – University of Edinburgh; University of Oxford","Dynamic Topic Adaptation for Phrase-based MT","http://www.aclweb.org/anthology/E/E14/E14-1035.pdf","papers","20140101Z00:00:00","","","University of Edinburgh; University of Oxford","","","","","","" +"Michele Tortelli – Politecnico di Bari","Bloom filter-based Routing in NDN","http://www.poliba.it/Didattica/docs/scorepoliba2014_submission_179.pdf","papers","20140101Z00:00:00","","","Politecnico di Bari","","","","","","" +"Filip Ginter, Jenna Kanerva – University of Turku","Fast Training of word2vec Representations Using N-gram Corpora","http://www2.lingfil.uu.se/SLTC2014/abstracts/sltc2014_submission_27.pdf","papers","20140101Z00:00:00","","","University of Turku","","","","","","" +"Petar Petrovski, Volha Bryl, Christian Bizer – University of Mannheim, Germany- Research Group Data and Web Science","Learning Regular Expressions for the Extraction of Product Attributes from E-commerce Microdata","http://ceur-ws.org/Vol-1267/LD4IE2014_Petrovski.pdf","papers","20140101Z00:00:00","","","University of Mannheim, Germany- Research Group Data and Web Science","","","","","","" +"Robert Meusel, Petar Petrovski, Christian Bizer – University of Mannheim, Germany- Research Group Data and Web Science","The Web Data Commons Microdata, RDFa and Microformat Dataset Series","http://link.springer.com/chapter/10.1007/978-3-319-11964-9_18#page-1","papers","20140101Z00:00:00","","","University of Mannheim, Germany- Research Group Data and Web Science","","","","","","" +"Robert Meusel, Peter Mika, Roi Blanco – University of Mannheim; Yahoo Labs- Barcelona","Focused Crawling for Structured Data","http://dl.acm.org/citation.cfm?id=2661902","papers","20140101Z00:00:00","","","University of Mannheim; Yahoo Labs- Barcelona","","","","","","" +"Chenchen Ding, Masao Utiyama, Eiichiro Sumita – National Institute of Information and Communications Technology Japan","Document-level Re-ranking with Soft Lexical and Semantic Features for Statistical Machine Translation","http://www.mibel.cs.tsukuba.ac.jp/~tei/AMTA2014.pdf","papers","20140101Z00:00:00","","","National Institute of Information and Communications Technology Japan","","","","","","" +"Masumi Shirakawa, Kotaro Nakayama, Eiji Aramaki, Takahiro Hara, Shojiro Nishio – Osaka University","Collecting Conceptualized Relations from Terabytes of Web Texts for Understanding Unknown Terms","http://dl.acm.org/citation.cfm?id=2682777","papers","20140101Z00:00:00","","","Osaka University","","","","","","" +"Jenna Kanerva, Juhani Luotolahti, Veronika Laippala, Filip Ginter – University of Turku","Syntactic N-gram Collection from a Large-Scale Corpus of Internet Finnish","http://ebooks.iospress.nl/volumearticle/38025","papers","20140101Z00:00:00","","","University of Turku","","","","","","" +"Willem Robert van Hage, Thomas Ploeger, Jesper Hoeksema – SynerScope B.V., VU University Amsterdam","Number frequency on the web","http://dl.acm.org/citation.cfm?id=2576962","papers","20140101Z00:00:00","","","SynerScope B.V., VU University Amsterdam","","","","","","" +"Christian Buck, Kenneth Heafield, Bas van Ooyen – University of Edinburgh, Stanford University, Owlin BV","N-gram Counts and Language Models from the Common
Crawl","http://statmt.org/ngrams/BuckEtAl_LREC2014_CommonCrawlLM.pdf","papers","20140101Z00:00:00","","","University of Edinburgh, Stanford University, Owlin BV","","","","","","" +"Christian Hardmeier, Sara Stymne, Jörg Tiedemann, Aaron Smith, Joakim Nivre – Uppsala University: Department of Linguistics and Philology","Anaphora Models and Reordering for Phrase-Based SMT","http://acl2014.org/acl2014/W14-33/pdf/W14-3312.pdf","papers","20140101Z00:00:00","","","Uppsala University: Department of Linguistics and Philology","","","","","","" +"Lane O. B. Schwartz, Timothy Anderson, Jeremy Gwinnup, Katherine M. Young – Air Force Research Laboratory, SRA International, N-Space Analysis LLC","Machine Translation and Monolingual Postediting:The AFRL WMT-14 System","http://www.ling.uni-potsdam.de/~koller/aclpub/W14-33/cdrom/pdf/W14-3321.pdf","papers","20140101Z00:00:00","","","Air Force Research Laboratory, SRA International, N-Space Analysis LLC","","","","","","" +"Hoang Cuong, Khalil Sima’an – University of Amsterdam - Institute for Logic, Language and Computation","Latent Domain Translation Models in Mix-of-Domains Haystack","http://www.aclweb.org/anthology/C/C14/C14-1182.pdf","papers","20140101Z00:00:00","","","University of Amsterdam - Institute for Logic, Language and Computation","","","","","","" +"Thomas Steiner, Hannes Mühleisen, Ruben Verborgh, Pierre-Antoine Champin, Benoît Encelle, Yannick Prié – Université de Lyon, Database Architectures Group; Multimedia Lab, Ghent University; iMinds, Université de Nantes","Weaving the Web(VTT) of Data","http://telemedicina.unifesp.br/pub/Events/2013-05%20-%20WWW2013/www2013/www2013.org/companion/p1399.pdf","papers","20140101Z00:00:00","","","Université de Lyon, Database Architectures Group; Multimedia Lab, Ghent University; iMinds, Université de Nantes","","","","","","" +"Marcin Wylot, Philippe Cudré-Mauroux, Paul Groth – eXascale Infolab, University of Fribourg; VU University Amsterdam","TripleProv: Efficient Processing of Lineage Queries in a Native RDF Store","http://exascale.info/sites/default/files/TipleProv.pdf","papers","20140101Z00:00:00","","","eXascale Infolab, University of Fribourg; VU University Amsterdam","","","","","","" +"Robert Meusel, Sebastiano Vigna, Oliver Lehmberg, Christian Bizer – Data and Web Science Group - University of Mannheim, Laboratory for Web - Algorithmics Università degli Studi di Milano","Graph Structure in the Web — Revisited","http://vigna.di.unimi.it/ftp/papers/GraphStructureRevisited.pdf","papers","20140101Z00:00:00","","","Data and Web Science Group - University of Mannheim, Laboratory for Web - Algorithmics Università degli Studi di Milano","","","","","","" +"Calvin Ardi, John Heidemann – USC/Information Sciences Institute","Web-scale Content Reuse Detection","ftp://ftp.isi.edu/isi-pubs/tr-692.pdf","papers","20140101Z00:00:00","","","USC/Information Sciences Institute","","","","","","" +"Yuta Tsuboi – IBM Resarch","Neural Networks Leverage Corpus-wide Information for Part-of-speech Tagging","http://2boy.org/~yuta/publications/neuraltagger-emnlp2014-tsuboi.pdf","papers","20140101Z00:00:00","","","IBM Resarch","","","","","","" +"Mauro Cettolo, Nicola Bertoldi, Marcello Federico, Holger Schwenk, Loïc Barrault, Christophe Servan – Fondazione Bruno Kessler, University of Le Mans, Xerox Research Centre Europe","Translation project adaptation for MT-enhanced computer assisted translation","http://link.springer.com/article/10.1007/s10590-014-9152-1","papers","20140101Z00:00:00","","","Fondazione Bruno Kessler, 
University of Le Mans, Xerox Research Centre Europe","","","","","","" +"Germán Sanchis-Trilles, Daniel Ortiz-Martínez, Francisco Casacuberta – PRHLT Centre - Universidad Politécnica de Valencia","Efficient Wordgraph Pruning for Interactive Translation Prediction","http://www.casmacat.eu/uploads/Main/2eamt2014.pdf","papers","20140101Z00:00:00","","","PRHLT Centre - Universidad Politécnica de Valencia","","","","","","" +"Vasilis Kolias, Ioannis Anagnostopoulos, Eleftherios Kayafas – National Technical University of Athens, University of Thessaly","Exploratory Analysis of a Terabyte Scale Web Corpus","http://arxiv.org/abs/1409.5443","papers","20140101Z00:00:00","","","National Technical University of Athens, University of Thessaly","","","","","","" +"Masahiro Mizukami, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura – Nara Institute of Science and Technology","Building a Free General-Domain Paraphrase Database for Japanese","http://isw3.naist.jp/~masahiro-mi/paper/ma14cocosda.pdf","papers","20140101Z00:00:00","","","Nara Institute of Science and Technology","","","","","","" "Robert Meusel, Sebastiano Vigna, Oliver Lehmberg, Christian Bizer – University of Mannheim, Germany; Università degli Studi di Milano, Italy","The Graph Structure in the Web – Analyzed on Different Aggregation Levels","https://pdfs.semanticscholar.org/b5d5/88298e6845b4bfd40ea779ce21e628239ef3.pdf","papers","20150101Z00:00:00","","","University of Mannheim, Germany; Università degli Studi di Milano, Italy","web-science/hyperlinkgraph","","","","","" "Alex Stolz, Martin Hepp – Universitaet der Bundeswehr Munich, Germany","Towards Crawling the Web for Structured Data: Pitfalls of Common Crawl for E-Commerce","http://ceur-ws.org/Vol-1426/paper-04.pdf","papers","20150101Z00:00:00","","","Universitaet der Bundeswehr Munich, Germany","nlp/corpus-representativeness, semantic web, microdata, e-commerce","","","","","" "Julian Eberius, Maik Thiele, Katrin Braunschweig, Wolfgang Lehner – Technische Universität Dresden, Germany","Top-k Entity Augmentation Using Consistent Set Covering","https://www.semanticscholar.org/paper/Top-k-entity-augmentation-using-consistent-set-Eberius-Thiele/a554fe7c49837e2d2d995e00fd3b62a6ca5650f2","papers","20150101Z00:00:00","","","Technische Universität Dresden, Germany","semantic web, web tables, web mining","To enable repeatability we publish the implementation², but also include the web table corpus used for the evaluation³.
This corpus contains 100M Web tables extracted from a publicly available Web crawl⁴ [4: http://commoncrawl.org]","","{DresdenWebTableCorpus}","","" +"Matthew Malensek, Sangmi Lee Pallickara, Shrideep Pallickara – Colorado State University","Alleviation of Disk I/O Contention in Virtualized Settings for Data-Intensive Computing","http://galileo.cs.colostate.edu/papers/DiskInterference-BDC.pdf","papers","20150101Z00:00:00","","","Colorado State University","","","","","","" +"Titus Barik, Kevin Lubick, Justin Smith, John Slankas, Emerson Murphy-Hill – ABB Corporate Research and North Carolina State University","FUSE: A Reproducible, Extendable, Internet-scale Corpus of Spreadsheets","http://kjlubick.github.io/pubs/MSR2015-Fuse_spreadsheet_corpus.pdf","papers","20150101Z00:00:00","","","ABB Corporate Research and North Carolina State University","","","","","","" +"Joachim Daiber, Lautaro Quiroz, Roger Wechsler, Stella Frank – University of Amsterdam","Splitting Compounds by Semantic Analogy","https://ufal.mff.cuni.cz/~rosa/2015/docs/dmtw2015.pdf#page=26","papers","20150101Z00:00:00","","","University of Amsterdam","","","","","","" +"Mikhail Galkin, Dmitry Mouromtsev, Sören Auer – ITMO University- St. Petersburg, Russia, University of Bonn- Germany","Identifying Web Tables – Supporting a Neglected Type of Content on the Web","http://arxiv.org/pdf/1503.06598.pdf","papers","20150101Z00:00:00","","","ITMO University- St. Petersburg, Russia, University of Bonn- Germany","","","","","","" +"Brendan Juba – Washington University in St. Louis","Principled Sampling for Anomaly Detection","http://www.cse.wustl.edu/~bjuba/papers/anomaly_detection.pdf","papers","20150101Z00:00:00","","","Washington University in St. Louis","","","","","","" +"Kowalczuk Ewa, Jedrzej Potoniec, Agnieszka Ławrynowicz – Institute of Computing Science, Poznan University of Technology, Poland","Extracting Usage Patterns of Ontologies on the Web: a Case Study on GoodRelations Vocabulary in RDFa","http://ceur-ws.org/Vol-1265/owled2014_submission_14.pdf","papers","20150101Z00:00:00","","","Institute of Computing Science, Poznan University of Technology, Poland","","","","","","" +"Junfei Guo, Juan Liu, Qi Han, Andreas Maletti – School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","A Tunable Language Model for Statistical Machine Translation","http://www.ims.uni-stuttgart.de/institut/mitarbeiter/maletti/pub/guoliuhanmal14.pdf","papers","20150101Z00:00:00","","","School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany","","","","","","" +"Kay Ousterhout, Ryan Rasti, Sylvia Ratnasamy, Scott Shenker, Byung-Gon Chun – UC Berkeley, ICSI, VMware, Seoul National University","Making Sense of Performance in Data Analytics Frameworks","http://www.eecs.berkeley.edu/~keo/publications/nsdi15-final147.pdf","papers","20150101Z00:00:00","","","UC Berkeley, ICSI, VMware, Seoul National University","","","","","","" +"Evan Jaffe, Lifeng Jin, David King, Marten van Schijndel – Dept.
of Linguistics, Ohio State University","Azmat: Sentence Similarity using Associative Matrices","http://www.ling.ohio-state.edu/~vanschm/resources/uploads/jaffe_etal-2015-semeval.pdf","papers","20150101Z00:00:00","","","Dept. of Linguistics, Ohio State University","","","","","","" +"Alexander A Alemi, Paul Ginsparg – Dept. of Physics, Cornell University, Dept. of Physics and Information Science, Cornell University","Text Segmentation based on Semantic Word Embeddings","http://arxiv.org/pdf/1503.05543.pdf","papers","20150101Z00:00:00","","","Dept. of Physics, Cornell University, Dept. of Physics and Information Science, Cornell University","","","","","","" "Ivan Habernal, Omnia Zayed, Iryna Gurevych – University of Darmstadt, Germany","C4Corpus: Multilingual Web-Size Corpus with Free License","http://www.lrec-conf.org/proceedings/lrec2016/pdf/388_Paper.pdf","papers","20160101Z00:00:00","","Large Web corpora containing full documents with permissive licenses are crucial for many NLP tasks. In this article we present the construction of 12 million-pages Web corpus (over 10 billion tokens) licensed under CreativeCommons license family in 50+ languages that has been extracted from CommonCrawl, the largest publicly available general Web crawl to date with about 2 billion crawled URLs. Our highly-scalable Hadoop-based framework is able to process the full CommonCrawl corpus on 2000+ CPU cluster on the Amazon Elastic Map/Reduce infrastructure. The processing pipeline includes license identification, state-of-the-art boilerplate removal, exact duplicate and near-duplicate document removal, and language detection. The construction of the corpus is highly configurable and fully reproducible, and we provide both the framework (DKPro C4CorpusTools) and the resulting data (C4Corpus) to the research community.","University of Darmstadt, Germany","nlp/corpus-construction, legal/copyright, license/creative-commons, nlp/boilerplate-removal, ir/duplicate-detection","","CC-MAIN-2016-07","{DKPro-C4}","","" "Roland Schäfer – Freie Universität Berlin, Germany","CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws","http://rolandschaefer.net/?p=994","papers","20160101Z00:00:00","","In this paper, I describe a method of creating massively huge web corpora from the CommonCrawl data sets and redistributing the resulting annotations in a stand-off format. Current EU (and especially German) copyright legislation categorically forbids the redistribution of downloaded material without express prior permission by the authors. Therefore, stand-off annotations or other derivates are the only format in which European researchers (like myself) are allowed to re-distribute the respective corpora. In order to make the full corpora available to the public despite such restrictions, the stand-off format presented here allows anybody to locally reconstruct the full corpora with the least possible computational effort.","Freie Universität Berlin, Germany","nlp/corpus-construction, legal/copyright","","","{CommonCOW}","","" "Roland Schäfer – Freie Universität Berlin, Germany","Accurate and Efficient General-purpose Boilerplate Detection for Crawled Web Corpora","https://doi.org/10.1007/s10579-016-9359-2","papers","20170101Z00:00:00","Boilerplate, Corpus construction, Non-destructive corpus normalization, Web corpora","Removal of boilerplate is one of the essential tasks in web corpus construction and web indexing. 
Boilerplate (redundant and automatically inserted material like menus, copyright notices, navigational elements, etc.) is usually considered to be linguistically unattractive for inclusion in a web corpus. Also, search engines should not index such material because it can lead to spurious results for search terms if these terms appear in boilerplate regions of the web page. The size of large web corpora necessitates the use of efficient algorithms while a high accuracy directly improves the quality of the final corpus. In this paper, I present and evaluate a supervised machine learning approach to general-purpose boilerplate detection for languages based on Latin alphabets which is both very efficient and very accurate. Using a Multilayer Perceptron and a high number of carefully engineered features, I achieve between 95\% and 99\% correct classifications (depending on the input language) with precision and recall over 0.95. Since the perceptrons are trained on language-specific data, I also evaluate how well perceptrons trained on one language perform on other languages. The single features are also evaluated for the merit they contribute to the classification. I show that the accuracy of the Multilayer Perceptron is on a par with that of other classifiers such as Support Vector Machines. I conclude that the quality of general-purpose boilerplate detectors depends mainly on the availability of many well-engineered features and which are highly language-independent. The method has been implemented in the open-source texrex web page cleaning software, and large corpora constructed using it are available from the COW initiative, including the CommonCOW corpora created from CommonCrawl data sets.","Freie Universität Berlin, Germany","nlp/boilerplate-removal, nlp/web-as-corpus, nlp/corpus-construction","","","","","" @@ -311,7 +354,7 @@ "{NLLB Team}, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang – Meta AI; UC Berkeley, USA; Johns Hopkins University, USA","No Language Left Behind: Scaling Human-Centered Machine Translation","https://arxiv.org/abs/2207.04672","papers","20220101Z00:00:00","","Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. 
More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.","Meta AI; UC Berkeley, USA; Johns Hopkins University, USA","nlp/corpus-construction, nlp/parallel-corpus, nlp/low-resource-language, nlp/language-identification","We begin with web data as our starting point, provided by CommonCrawl (CC)18 and ParaCrawl (Bañón et al., 2020).","","NLLB","","" "Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro – Microsoft; NVIDIA","Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model","https://arxiv.org/abs/2201.11990","papers","20220101Z00:00:00","Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","","Microsoft; NVIDIA","nlp/language-model","Resources such as Common Crawl (CC) provide snapshots of the web which can be utilized as a source of language data. While these data sources contain an enormous amount of language data, they also require carefully designed preprocessing steps in order to select data which is of reasonable quality. As prior work has found (e.g., [9]), the quality of unfiltered Common Crawl data is lower than that of curated datasets and steps should be taken to increase the average quality of data selected from Common Crawl for LM pretraining. [...] Common Crawl: As mentioned previously, Common Crawl comprises an immense amount of data. We chose to process two snapshots, 2020-50 and 2021-04, with the aim of acquiring around 150B tokens of training data. The first step of this process is language detection [11] and text extraction from the raw HTML included in the Common Crawl WARC files¹. Following the rationale presented in [11], we used the pycld2² and jusText³ libraries for these tasks. [...] In addition to Common Crawl data, we leveraged a number of other previously generated datasets. From The Pile, we selected Books3, OpenWebText2, Stack Exchange, PubMed Abstracts, Wikipedia, Gutenberg (PG-19), BookCorpus2, NIH ExPorter, and Pile-CC datasets. 
We also included the CC-Stories and RealNews datasets used to train Megatron [63].","","","","" "Tom Alby, Robert Jäschke – Humboldt-Universität zu Berlin, Berlin, Germany","Analyzing the Web: Are Top Websites Lists a Good Choice for Research?","https://link.springer.com/chapter/10.1007/978-3-031-16802-4_2","papers","20220101Z00:00:00","","The web has been a subject of research since its beginning, but it is difficult if not impossible to analyze the whole web, even if a database of all URLs would be freely accessible. Hundreds of studies have used commercial top websites lists as a shortcut, in particular the Alexa One Million Top Sites list. However, apart from the fact that Amazon decided to terminate Alexa, we question the usefulness of such lists for research as they have several shortcomings. Our analysis shows that top sites lists miss frequently visited websites and offer only little value for language-specific research. We present a heuristic-driven alternative based on the Common Crawl host-level web graph while also taking language-specific requirements into account.","Humboldt-Universität zu Berlin, Berlin, Germany","web-science, domain-ranking","","hyperlinkgraph/cc-main-2021-feb-apr-may/hostgraph","","","" -"Olexandra Belz – Ivan Franko National University of Lviv, Ukraine","Use of schema.org micro-markup in e-commerce projects","http://baltijapublishing.lv/index.php/threeseas/article/view/1964/1973","papers","20220101Z00:00:00","","The purpose of the article is to identify the most effective schema.org micro-markup schemes used in e-commerce projects. Methodology. The research included competitive intelligence among the leading online platforms operating in Europe in general and in Ukraine in particular. The study involved TOP-8 e-commerce projects in Ukraine and TOP-9 global cross-border marketplaces operating in Europe. The service validator.schema.org was chosen as the research tool. Results. The study showed that the most popular schema.org micro-markup format is JSON-LD. In general, 82.4% of the surveyed sites use JSON-LD microdata format. Some sites use two microdata formats: JSON-LD and Microdata. But none of the top online marketplaces use the RDFa micro-markup format. Popular marketplaces operating in Ukraine and Europe often use the same types of schema.org vocabulary. However, the frequency of using micro-markup by top marketplaces operating in Ukraine is much higher than the frequency of using micro-markup by top marketplaces operating in Europe. In addition, Ukrainian marketplaces use a much wider list of schema.org micro-markup properties than marketplaces operating in Europe. However, no online store has implemented the properties of advantages and disadvantages of goods recommended by Google in the scheme. Practical implications. The study suggests schema.org micro-markup schemes for homepage, category page, product page, about page, payment and delivery page, warranty and returns page, contact page and blog. The proposed templates of micro-markup schemes were validated using the validator.schema.org service. The study recommends using the JSON-LD format for semantic markup of website content. Value/originality. Implementation of effective semantic markup of site content will allow search engines to more accurately identify the information presented on the site. This, in turn, will improve the visibility of the online marketplace in the Search Engine Results Page of Google, Bing, Yahoo! 
etc.","Ivan Franko National University of Lviv, Ukraine","e-commerce, online marketplaces, linked data, schema.org annotations, SEO","Since 2008, the Common Crawl project has been crawling websites to collect web page data (extracting metadata and web page text). At the time of writing, the latest scan took place from November 26 to December 10, 2022. As a result of this scan, 3.35 billion web pages were processed and 420 petabytes of content were removed (Common Crawl, 2022). Both scientists and practitioners are working with the obtained data sets of the Common Crawl project.¶ On September 22, 2022, the Web Data Commons (WDC) project released the Schema.org Table Annotation Benchmark (SOTAB) for public download (Web Data Commons, 2022).","","","","" +"Olexandra Belz – Ivan Franko National University of Lviv, Ukraine","Use of schema.org micro-markup in e-commerce projects","http://baltijapublishing.lv/index.php/threeseas/article/view/1964/1973","papers","20220101Z00:00:00","","The purpose of the article is to identify the most effective schema.org micro-markup schemes used in e-commerce projects. Methodology. The research included competitive intelligence among the leading online platforms operating in Europe in general and in Ukraine in particular. The study involved TOP-8 e-commerce projects in Ukraine and TOP-9 global cross-border marketplaces operating in Europe. The service validator.schema.org was chosen as the research tool. Results. The study showed that the most popular schema.org micro-markup format is JSON-LD. In general, 82.4% of the surveyed sites use JSON-LD microdata format. Some sites use two microdata formats: JSON-LD and Microdata. But none of the top online marketplaces use the RDFa micro-markup format. Popular marketplaces operating in Ukraine and Europe often use the same types of schema.org vocabulary. However, the frequency of using micro-markup by top marketplaces operating in Ukraine is much higher than the frequency of using micro-markup by top marketplaces operating in Europe. In addition, Ukrainian marketplaces use a much wider list of schema.org micro-markup properties than marketplaces operating in Europe. However, no online store has implemented the properties of advantages and disadvantages of goods recommended by Google in the scheme. Practical implications. The study suggests schema.org micro-markup schemes for homepage, category page, product page, about page, payment and delivery page, warranty and returns page, contact page and blog. The proposed templates of micro-markup schemes were validated using the validator.schema.org service. The study recommends using the JSON-LD format for semantic markup of website content. Value/originality. Implementation of effective semantic markup of site content will allow search engines to more accurately identify the information presented on the site. This, in turn, will improve the visibility of the online marketplace in the Search Engine Results Page of Google, Bing, Yahoo! etc.","Ivan Franko National University of Lviv, Ukraine","e-commerce, online marketplaces, linked data, schema.org annotations, SEO","Since 2008, the Common Crawl project has been crawling websites to collect web page data (extracting metadata and web page text). At the time of writing, the latest scan took place from November 26 to December 10, 2022. As a result of this scan, 3.35 billion web pages were processed and 420 petabytes of content were removed (Common Crawl, 2022). 
Both scientists and practitioners are working with the obtained data sets of the Common Crawl project.¶ On September 22, 2022, the Web Data Commons (WDC) project released the Schema.org Table Annotation Benchmark (SOTAB) for public download (Web Data Commons, 2022).","","","WebDataCommons","" "Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim – Kakao Brain, South Korea","Coyo-700m: Image-text pair dataset","https://github.com/kakaobrain/coyo-dataset","papers","20220101Z00:00:00","","We collected about 10 billion pairs of alt-text and image source in HTML documents in Common Crawl from Oct. 2020 to Aug. 2021. and eliminated uninformative pairs through the image and text level filtering process with minimal cost. The following figure outlines our data collection procedure.","Kakao Brain, South Korea","nlp/multimodal-corpora","","five CommonCrawl dumps, ranging from 2017 to 2020","COYO-700M","","" "Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, Quoc Le – Google","LaMDA: Language Models for Dialog Applications","https://arxiv.org/abs/2201.08239","papers","20220101Z00:00:00","","","Google","nlp/language-model, nlp/transformer-language-model","E Pre-training data composition¶ The pre-training data, called Infiniset, is a combination of dialog data from public dialog data and other public web documents. It consists of 2.97B documents and 1.12B dialogs with 13.39B utterances. The composition of the data is as follows: 50% dialogs data from public forums; 12.5% C4 data [11]; 12.5% code documents from sites related to programming like Q&A sites, tutorials, etc; 12.5% Wikipedia (English); 6.25% English web documents; and 6.25% Non-English web documents. The total number of words in the dataset is 1.56T. Note that this composition was chosen to achieve a more robust performance on dialog tasks (Section 4) while still keeping its ability to perform other tasks like code generation. As future work, we can study how the choice of this composition may affect the quality of some of the other NLP tasks performed by the model.","","","Tensorflow-C4","" "Mark Edward Phillips, Sawood Alam – University of North Texas, USA; Internet Archive, USA","Moving the End of Term Web Archive to the Cloud to Encourage Research Use and Reuse","https://digital.library.unt.edu/ark:/67531/metadc1998717/m2/1/high_res_d/EOT_WADL_2022.pdf","papers","20220101Z00:00:00","","The End of Term Web (EOT) Archive is a collaborative project with a goal of collecting the United States federal web, loosely defined as .gov and .mil, every four years coinciding with presidential elections and often a transition in the Executive Branch of the government. 
In 2021 the End of Term team began to process the longitudinal web archive for EOT-2008, EOT-2012, EOT-2016, and EOT-2020 to move into the Amazon S3 storage service as part of the Amazon Open Data Program. This effort adopted tools, structures, and documentation developed by Common Crawl in an effort to maximize potential research access and reuse of existing tools and documentation. This paper presents the process of organizing, staging, processing, and moving these collections into the Amazon cloud.","University of North Texas, USA; Internet Archive, USA","web archive","","","","","" @@ -320,14 +363,14 @@ "Josh A. Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, Katerina Sedova – Georgetown University’s Center for Security and Emerging Technology, USA; OpenAI; Stanford Internet Observatory, USA","Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations","https://arxiv.org/abs/2301.04246","papers","20230101Z00:00:00","Computers and Society (cs.CY), FOS: Computer and information sciences, FOS: Computer and information sciences","","Georgetown University’s Center for Security and Emerging Technology, USA; OpenAI; Stanford Internet Observatory, USA","nlp/generative-language-models, ai/ethics-of-machine-learning, cc-cited-not-used","While some of this data is typically taken from relatively structured sources such as Wikipedia, a large majority of data usually comes from tools like Common Crawl that scrape the web for publicly available text.¹⁴⁷ [147. CommonCrawl freely publishes its archives of web data. See “So you’re ready to get started.,” Common Crawl, accessed June 27, 2022, https://commoncrawl.org/the-data/get-started/. But anyone can build their own software for web scraping or use other tools to extract data from websites.]","","","","" "Xinyue Wang – Virginia Tech, USA","Large Web Archive Collection Infrastructure and Services","http://hdl.handle.net/10919/113345","papers","20230101Z00:00:00","","The web has evolved to be the primary carrier of human knowledge during the information age. The ephemeral nature of much web content makes web knowledge preservation vital in preserving human knowledge and memories. Web archives are created to preserve the current web and make it available for future reuse. In addition to its preservation purpose, web archive data is also used as a source for research and for lost information discovery. However, the reuse of web archive data is inherently challenging because of the scale of data size and requirements of big data tools to serve and analyze web archive data efficiently. In this research, we propose to build a web archive big data processing infrastructure that can support efficient and scalable web archive reuse like quantitative data analysis and browsing services. We adopt industry frameworks and tools to establish a platform that can provide high-performance computation for web archive initiatives and users. We propose to convert the standard web archive data file format to a columnar data format for efficient future reuse. Our experiments show that our proposed design can significantly improve quantitative data analysis tasks for common web archive data usage. Our design can also serve an efficient web browsing service without adopting a sophisticated web hosting architecture. In addition to the standard web archive data, we also integrate Twitter data into our design as a unique web archive resource. 
Twitter is a prominent source of data for researchers in a variety of fields and an integral element of the web's history. We aggregate the Twitter data from different sources and integrate it into the suggested design for reuse. We are able to greatly increase the processing performance of workloads around social media data by overcoming the data loading bottleneck with a web-archive-like Parquet data format.","Virginia Tech, USA","web-archiving, data formats, big data, data processing, WARC, Parquet, CDX","We use Common Crawl’s web archiving data crawled from May 20 to 23, 2018. The data set consists of 1219 Gzip compressed WARC files totaling 0.98 TB, and contains 53,324,440 records. The WARC files are organized by crawling time, each containing records crawled from a mutually exclusive time span. We then reformat the WARC files to yield the following five datasets for comparison: 1) the original WARC files; 2) case 1 plus CDX index files built against all the original WARC files; 3) Parquet files containing the same information as case 1, with most columns in String type; 4) the same as case 3 but the Timestamp column in INT64 Timestamp type; 5) Avro, [...]","","","","" "Petros Terzis – University College London, United Kingdom","Building Programmable Commons","https://osf.io/preprints/socarxiv/yuef5/","papers","20230101Z00:00:00","","","University College London, United Kingdom","digital-commons, public-commons, cc-cited-not-used","Programmable commons and the public value of programmability are thus introduced as parts of a broader political project that aspires to democratise access to, and management of these resources. By drawing on the history of a family of commons -namely intellectual commons, infrastructure commons, and global commons-, this paper explores the material form and impact of infocomputational technologies and presents a blend of bottom-up and top-down initiatives for their commons-based organisation and governance.","","","","" -"Hans W. A. Hanley, Deepak Kumar, Zakir Durumeric – Stanford University, USA","A Golden Age: Conspiracy Theories' Relationship with Misinformation Outlets, News Media, and the Wider Internet","https://arxiv.org/abs/2301.10880","papers","20230101Z00:00:00","","Do we live in a {""}Golden Age of Conspiracy Theories?{""} In the last few decades, conspiracy theories have proliferated on the Internet with some having dangerous real-world consequences. A large contingent of those who participated in the January 6th attack on the US Capitol believed fervently in the QAnon conspiracy theory. In this work, we study the relationships amongst five prominent conspiracy theories (QAnon, COVID, UFO/Aliens, 9-11, and Flat-Earth) and each of their respective relationships to the news media, both mainstream and fringe. Identifying and publishing a set of 755 different conspiracy theory websites dedicated to our five conspiracy theories, we find that each set often hyperlinks to the same external domains, with COVID and QAnon conspiracy theory websites largest amount of shared connections. Examining the role of news media, we further find that not only do outlets known for spreading misinformation hyperlink to our set of conspiracy theory websites more often than mainstream websites but this hyperlinking has increased dramatically between 2018 and 2021, with the advent of QAnon and the start of COVID-19 pandemic. 
Using partial Granger-causality, we uncover several positive correlative relationships between the hyperlinks from misinformation websites and the popularity of conspiracy theory websites, suggesting the prominent role that misinformation news outlets play in popularizing many conspiracy theories.","Stanford University, USA","nlp/fake-news-detection, misinformation, disinformation, conspiracy theories, webscience/hyperlinkgraph","Using our own web scrapes and pages historically scraped by Common Crawl,¹ [¹https://commoncrawl.org/] we then document the state and the changing behaviors of the conspiracy theory ecosystem and their relationship to a separate set of 530 known misinformation outlets, 565 authentic news websites, and 528 non-news websites. [...] Utilizing the Common Crawl harmonic and PageRank centrality measures that measure a website’s centrality across all of the crawled Internet, we then find many of the websites in our dataset have relatively high network centrality, suggesting that many of them are not peripheral on the Internet but actually near the Internet’s core/are mainstream. Indeed examining, the hyperlink connections between news media and these conspiracy theories, we find that many of them rely heavily on mainstream as well as misinformation outlets (compared to non-news websites) for their information, with many popular misinformation outlets also hyperlinking back to many of these conspiracy theory websites. [...] 4.1 Common Crawl Page Retrieval and Website Crawling To gather the set of hyperlinks between our websites, we utilize Common Crawl data [92]—widely considered the most complete publicly available source of web crawl data—and our own website crawls. For each website in our dataset, we collect all the domain’s HTML pages that were indexed by Common Crawl before August 2021. In addition to Common Crawl data, we further utilize our own website scrapes. We utilize our own crawls, in addition to Common Crawl, due to noisiness, missing pages, and missing domains within the Common Crawl dataset [85]. For example, 309 particularly small conspiracy theory domains were not contained within the Common Crawl dataset (i.e. these websites often only contained a few dozen pages). Thus for each website in our dataset, we further gather all the HTML pages 10 hops from each website’s homepage (i.e., we collect all URLs linked from the homepage (1st hop), then all URLs linked from the pages that were linked by the homepage (2nd hop), and so forth). For each HTML page from our scrapes and Common Crawl, we parse the HTML, detect the date that page was published, and collect hyperlinks to other pages (i.e., HTML tags). Altogether we gather the available Common Crawl pages and scrape the HTML for our 755 conspiracy theory, 530 misinformation, 565 authentic news, and 528 non-news websites. [...] Utilizing Common Crawl network data [ 61] over the indexed Internet (87.7 million websites), we thus determine the network centrality of our set of conspiracy-focused websites to understand if each conspiracy theory website category is “core” (regularly utilized on the Internet) or “peripheral”. We utilize centralities across Common Crawl’s dataset rather than our partial one in order to get a sense of each conspiracy theory’s centrality on the entire Internet. 
While only 446 of our conspiracy theory websites are within the Common Crawl dataset, this analysis allows us to fully understand the relative roles that each conspiracy theory website group in our dataset plays on the wider Internet.","","","","" +"Hans W. A. Hanley, Deepak Kumar, Zakir Durumeric – Stanford University, USA","A Golden Age: Conspiracy Theories' Relationship with Misinformation Outlets, News Media, and the Wider Internet","https://arxiv.org/abs/2301.10880","papers","20230101Z00:00:00","","Do we live in a {""}Golden Age of Conspiracy Theories?{""} In the last few decades, conspiracy theories have proliferated on the Internet with some having dangerous real-world consequences. A large contingent of those who participated in the January 6th attack on the US Capitol believed fervently in the QAnon conspiracy theory. In this work, we study the relationships amongst five prominent conspiracy theories (QAnon, COVID, UFO/Aliens, 9-11, and Flat-Earth) and each of their respective relationships to the news media, both mainstream and fringe. Identifying and publishing a set of 755 different conspiracy theory websites dedicated to our five conspiracy theories, we find that each set often hyperlinks to the same external domains, with COVID and QAnon conspiracy theory websites largest amount of shared connections. Examining the role of news media, we further find that not only do outlets known for spreading misinformation hyperlink to our set of conspiracy theory websites more often than mainstream websites but this hyperlinking has increased dramatically between 2018 and 2021, with the advent of QAnon and the start of COVID-19 pandemic. Using partial Granger-causality, we uncover several positive correlative relationships between the hyperlinks from misinformation websites and the popularity of conspiracy theory websites, suggesting the prominent role that misinformation news outlets play in popularizing many conspiracy theories.","Stanford University, USA","nlp/fake-news-detection, misinformation, disinformation, conspiracy theories, web-science/hyperlinkgraph","Using our own web scrapes and pages historically scraped by Common Crawl,¹ [¹https://commoncrawl.org/] we then document the state and the changing behaviors of the conspiracy theory ecosystem and their relationship to a separate set of 530 known misinformation outlets, 565 authentic news websites, and 528 non-news websites. [...] Utilizing the Common Crawl harmonic and PageRank centrality measures that measure a website’s centrality across all of the crawled Internet, we then find many of the websites in our dataset have relatively high network centrality, suggesting that many of them are not peripheral on the Internet but actually near the Internet’s core/are mainstream. Indeed examining, the hyperlink connections between news media and these conspiracy theories, we find that many of them rely heavily on mainstream as well as misinformation outlets (compared to non-news websites) for their information, with many popular misinformation outlets also hyperlinking back to many of these conspiracy theory websites. [...] 4.1 Common Crawl Page Retrieval and Website Crawling To gather the set of hyperlinks between our websites, we utilize Common Crawl data [92]—widely considered the most complete publicly available source of web crawl data—and our own website crawls. For each website in our dataset, we collect all the domain’s HTML pages that were indexed by Common Crawl before August 2021. 
In addition to Common Crawl data, we further utilize our own website scrapes. We utilize our own crawls, in addition to Common Crawl, due to noisiness, missing pages, and missing domains within the Common Crawl dataset [85]. For example, 309 particularly small conspiracy theory domains were not contained within the Common Crawl dataset (i.e. these websites often only contained a few dozen pages). Thus for each website in our dataset, we further gather all the HTML pages 10 hops from each website’s homepage (i.e., we collect all URLs linked from the homepage (1st hop), then all URLs linked from the pages that were linked by the homepage (2nd hop), and so forth). For each HTML page from our scrapes and Common Crawl, we parse the HTML, detect the date that page was published, and collect hyperlinks to other pages (i.e., HTML tags). Altogether we gather the available Common Crawl pages and scrape the HTML for our 755 conspiracy theory, 530 misinformation, 565 authentic news, and 528 non-news websites. [...] Utilizing Common Crawl network data [ 61] over the indexed Internet (87.7 million websites), we thus determine the network centrality of our set of conspiracy-focused websites to understand if each conspiracy theory website category is “core” (regularly utilized on the Internet) or “peripheral”. We utilize centralities across Common Crawl’s dataset rather than our partial one in order to get a sense of each conspiracy theory’s centrality on the entire Internet. While only 446 of our conspiracy theory websites are within the Common Crawl dataset, this analysis allows us to fully understand the relative roles that each conspiracy theory website group in our dataset plays on the wider Internet.","","","","" "Ralph Peeters, Reng Chiz Der, Christian Bizer – University of Mannheim, Germany","WDC Products: A Multi-Dimensional Entity Matching Benchmark","https://arxiv.org/abs/2301.09521","papers","20230101Z00:00:00","","","University of Mannheim, Germany","semantic-web, semantic-web/microformats, e-commerce, linked data, schema.org annotations","The first step of the pipeline is the extraction of large amounts of product offers from the Common Crawl⁴ [⁴https://commoncrawl.org/] using schema.org annotations. Some product offers contain product identifiers like MPNs and GTINs which allow us to group offers into [...] The Web Data Commons6 project regularly extracts schema.org annotations from the Common Crawl, the largest web corpus available to the public, in order to monitor the adoption of semantic annotations on the Web and to provide the extracted data for public download. The WDC Products benchmark uses product offers from the WDC Product Data Corpus V2020 (PDC2020)7. The corpus was created by extracting schema.org product data from the September 2020 version of the Common Crawl. The extracted data goes through a pipeline of cleansing steps such as removing offers from listing pages as well as advertisements that are contained in a page in addition to the main offer [31]. The resulting PDC2020 corpus consists of ∼98 million product offers originating from 603,000 websites.","CC-MAIN-2020-40","","","" "Xavier Amatriain – amatriain.net","Transformer models: an introduction and catalog","https://arxiv.org/abs/2302.07730","papers","20230101Z00:00:00","","","amatriain.net","nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","","","","","" "Nicholas Carlini, Matthew Jagielski, Christopher A. 
Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr – Google; ETH Zurich, Switzerland; NVIDIA; Robust Intelligence","Poisoning Web-Scale Training Datasets is Practical","https://arxiv.org/abs/2302.10149","papers","20230101Z00:00:00","Cryptography and Security (cs.CR), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences","Deep learning models are often trained on distributed, webscale datasets crawled from the internet. In this paper, we introduce two new dataset poisoning attacks that intentionally introduce malicious examples to a model's performance. Our attacks are immediately practical and could, today, poison 10 popular datasets. Our first attack, split-view poisoning, exploits the mutable nature of internet content to ensure a dataset annotator's initial view of the dataset differs from the view downloaded by subsequent clients. By exploiting specific invalid trust assumptions, we show how we could have poisoned 0.01% of the LAION-400M or COYO-700M datasets for just $60 USD. Our second attack, frontrunning poisoning, targets web-scale datasets that periodically snapshot crowd-sourced content -- such as Wikipedia -- where an attacker only needs a time-limited window to inject malicious examples. In light of both attacks, we notify the maintainers of each affected dataset and recommended several low-overhead defenses.","Google; ETH Zurich, Switzerland; NVIDIA; Robust Intelligence","nlp/corpus-construction, computer-security, nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","B.3 Common Crawl Common Crawl is a petabyte-scale corpus of web crawl data that is repeatedly captured on a roughly monthly basis. Each archive is a complete re-crawl of the internet that records the full activity, including all requests of the crawler and the host responses—with both HTTP headers and content. As such, each archive contains a static snapshot of all crawled pages at the time of visit. This may include new page content not seen during a previous crawl, and may exclude content that has become stale since the previous crawl. For example, data crawled during September 24 through October 8, 2022 contains 3.15 billion web pages with 380 TiB of uncompressed content from 34 million registered domains—1.3 billion URLs were not visited in any of the prior crawls.¹⁴ The Common Crawl dataset is vulnerable to an attack which is similar to both our frontrunning and split-view poisoning attacks. The adversary can purchase an expired domain which was previously contained in the Common Crawl, and it will be re-crawled with the adversary’s choice of content, which will then appear in subsequent Common Crawl snap- shots. Notice that, differently from the snapshot-poisoning attack on Wikipedia, there is no content moderation here and so the adversary simply needs to continue to control the domain to poison all future Common Crawl snapshots. Buying recently-expired domains that existed in previous Common Crawl snapshots allows a stronger form of attack where the attack can inject entirely new links into the crawl. This can be accomplished by adding links or subdomains to poisoned domains, and allowing the crawler to discover the new poisoned domains. Thus, an adversary may inject arbitrarily many pages into the Common Crawl dataset, not only from the originally expired subset. We do not implement this attack following our ethics statements outlined earlier. 
Since Common Crawl WARC files have been hosted by Amazon on AWS Athena (a serverless service)¹⁵, domain reconnaissance work to analyze URLs is inexpensive. Scanning through 10 years of Common Crawl data to analyze domains from popular TLDs with a high number of Common Crawl entries cost us USD$ 0.84. While additional analysis might somewhat increase this cost, it remains an inexpensive way to search for vulnerable domains. Buying recently expired domains, or domains that have a dangling DNS record with an active IP address is preferred, as domains that failed to return a 200-OK status in consecutive crawls seem to be moved to a lower priority. For example, among expired domains we purchased, just one domain accounts for more than 90% of all status codes among the purchased domains, while other domains we purchased as early as 12/20/2020 have seen relatively less scraping traffic across a 3-year period.¹⁶ Because Common Crawl is enormous and uncurated (to accurately reflect the content of the internet), poisoning all of Common Crawl is impractical due to size. Additionally, it is not always apparent how consumers of this data are processing it for downstream machine learning tasks. However, there exist many derivative datasets which are constructed by curating a relevant subset of the Common Crawl. This includes the LAION-5B image dataset [57], the text dataset known as the Pile [23], the multilingual text dataset CC-100 [78], and the CCMatrix dataset [61], a translation dataset of pairs of translated sentences. Such curation actually amplifies the power of an attack: an attack which adds 1MB of text to the Common Crawl would be poisoning a 2.5 · 10⁻⁹ fraction of the Common Crawl, but if this text bypasses the curation done for the CC-100 dataset, it could instead poison a 1.2 · 10⁻⁵ fraction of the English corpus, or even a full 9.1% of the Oromo corpus.","","","","" "Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei – Microsoft","Language Is Not All You Need: Aligning Perception with Language Models","https://arxiv.org/abs/2302.14045","papers","20230101Z00:00:00","","","Microsoft","nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","Text Corpora We train our model with The Pile [GBB+20] and Common Crawl (CC). The Pile is a massive English text dataset built for training large-scale language models, which is produced from a variety of data sources. We exclude data splits from GitHub, arXiv, Stack Exchange, and PubMed Central. We also include the Common Crawl snapshots (2020-50 and 2021-04) datasets, CC-Stories, and RealNews datasets [SPP+19, SPN+22]. The entire datasets have been purged of duplicate and near-duplicate documents, as well as filtered to exclude downstream task data. Refer to Appendix B.1.1 for detailed descriptions of training text corpora.¶ Image-Caption Pairs The image-caption pairs are constructed from several datasets, including English LAION-2B [SBV+22], LAION-400M [SVB+21], COYO-700M [BPK+22], and Conceptual Captions [SDGS18, CSDS21]. English LAION-2B, LAION-400M, and COYO-700M are collected from web pages of the Common Crawl web data by extracting image sources and the corresponding alt-text. Conceptual Captions are also from internet web pages. More details can be found in Appendix B.1.2. 
¶ Interleaved Image-Text Data We collect interleaved multimodal data from the Common Crawl snapshot, which is a publicly available archive of web pages. We use a filtering process to select about 71M web pages from the original 2B web pages in the snapshot. We then extract the text and images from the HTML of each selected web page. For each document, we limit the number of images to five to reduce noise and redundancy. We also randomly discard half of the documents that only have one image to increase the diversity. We provide more details about the data collection process in Appendix B.1.3. By using this corpus, we enable KOSMOS-1 to handle interleaved text and image and improve its few-shot ability.","CC-MAIN-2020-50, CC-MAIN-2021-04","","The-Pile-English, CC-Stories, RealNews, LAION-400M, LAION-2B, COYO-700M","" "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample – Meta AI","LLaMA: Open and Efficient Foundation Language Models","https://arxiv.org/abs/2302.13971","papers","20230101Z00:00:00","Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.","Meta AI","nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","English CommonCrawl [67%]. We preprocess five CommonCrawl dumps, ranging from 2017 to 2020, with the CCNet pipeline (Wenzek et al., 2020). This process deduplicates the data at the line level, performs language identification with a fastText linear classifier to remove non-English pages and filters low quality content with an n-gram language model. In addition, we trained a linear model to classify pages used as references in Wikipedia v.s. randomly sampled pages, and discarded pages not classified as references.","five CommonCrawl dumps, ranging from 2017 to 2020","Tensorflow-C4","","" "Khaled Ammar – University of Waterloo, Ontario, Canada","Systems and Algorithms for Dynamic Graph Processing","https://uwspace.uwaterloo.ca/bitstream/handle/10012/19195/Ammar_Khaled.pdf","papers","20230101Z00:00:00","","","University of Waterloo, Ontario, Canada","graph-processing, web-science/hyperlinkgraph","Common Crawl experiments. 
Sixteen machines load 64 billion edges, index them, and track motifs in 20 batches of 10K random edge changes.","","","WDC-hyperlinkgraph, WDC-hyperlinkgraph (2014)","" -"Saffron Huang, Divya Siddarth – Collective Intelligence Project (cip.org)","Generative AI and the Digital Commons","https://arxiv.org/pdf/2303.11074.pdf","papers","20230101Z00:00:00","","","Collective Intelligence Project (cip.org)","digital-commons, public-commons, nlp/corpus-construction, nlp/language-models, nlp/generative-language-models, cc-cited-not-used","","","","","" +"Saffron Huang, Divya Siddarth – Collective Intelligence Project (cip.org)","Generative AI and the Digital Commons","https://arxiv.org/pdf/2303.11074.pdf","papers","20230101Z00:00:00","","","Collective Intelligence Project (cip.org)","digital-commons, public-commons, nlp/corpus-construction, nlp/language-models, nlp/generative-language-models, cc-cited-not-used","GFMs are trained on the digital commons. Generative foundation models leverage large databases of scraped information (text, code, images) from the internet to train highly capable models. This depends on the availability of public, scrapable data and leverages the “collective intelligence” of humanity, including the painstakingly edited Wikipedia, millennia’s worth of books, billions of Reddit comments, hundreds of terabytes’ worth of images, and more³ [³LAION-5B, which Stable Diffusion is trained on, has 5 billion text-image pairs (Schuhmann et al., 2022).The Pile has 100+GB of books (Gao et al., 2020)]. They also rely on non- profits like Common Crawl (which build and maintain open repositories of web crawl data), Creative Commons (for open licenses for the data used), open source libraries, and other digital infrastructure. They also take advantage of aggregated user preferences; e.g. the WebText dataset underlying the GPT family of models uses Reddit “karma scores” to select content for inclusion. All of this is common digital information and infrastructure that many people contribute to.","","","","" "Alan Chan, Herbie Bradley, Nitarshan Rajkumar – University of Cambridge, United Kingdom; Mila, Université de Montréal, Canada; EleutherAI","Reclaiming the Digital Commons: A Public Data Trust for Training Data","https://arxiv.org/abs/2303.09001","papers","20230101Z00:00:00","","Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities from the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens that have generated the data necessary to train models do not have input on how their data are to be used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons, to license to commercial model developers for a percentage cut of revenues from deployment. First, we argue in detail for the existence of such a trust. 
We also discuss feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust. We propose a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.","University of Cambridge, United Kingdom; Mila, Université de Montréal, Canada; EleutherAI","digital-commons, public-commons, nlp/corpus-construction, nlp/language-models, nlp/generative-language-models, cc-cited-not-used","The data trust could also start from existing efforts, such as the Common Crawl.","","","","" "Michał Turski, Tomasz Stanisławek, Karol Kaczmarek, Paweł Dyda, Filip Graliński – Snowflake; Adam Mickiewicz University, Poznań, Poland","CCpdf: Building a High Quality Corpus for Visually Rich Documents from Web Crawl Data","https://arxiv.org/pdf/2304.14953.pdf","papers","20230101Z00:00:00","","In recent years, the field of document understanding has progressed a lot. A significant part of this progress has been possible thanks to the use of language models pretrained on large amounts of documents. However, pretraining corpora used in the domain of document understanding are single domain, monolingual, or nonpublic. Our goal in this paper is to propose an efficient pipeline for creating a big-scale, diverse, multilingual corpus of PDF files from all over the Internet using Common Crawl, as PDF files are the most canonical types of documents as considered in document understanding. We analysed extensively all of the steps of the pipeline and proposed a solution which is a trade-off between data quality and processing time. We also share a CCpdf corpus in a form or an index of PDF files along with a script for downloading them, which produces a collection useful for language model pretraining. The dataset and tools published with this paper offer researchers the opportunity to develop even better multilingual language models.","Snowflake; Adam Mickiewicz University, Poznań, Poland","nlp/language-models, nlp/corpus-construction, document understanding, PDF","As our input we used web indexes created by Common Crawl. [...] They crawl webpages and save them into crawls dumps. A crawl dump contains billions of webpages (hundreds of terabytes of uncompressed data) and a new dump has been published nearly every month since March 2014. Some earlier, more irregular dumps starting from 2008 are also available.¹¹ Each dump also contains an index of the crawled pages. We decided to simply use the latest (and the largest) dump available at the time of writing this paper — the May 2022 dump.¹² [¹²https://commoncrawl.org/2022/06/may-2022-crawl-archive-now-available/] It contains 3.45 billion web pages, which amounts to 462 TB of uncompressed content. It would obviously be possible to apply the extraction procedure described in this paper to all crawls to obtain an even larger collection of PDFs, which would also allow for a diachronic analysis, but we wanted to focus on the most recent documents. Note that dumps contain only files considered as text files by the Common Crawl web robot. Mostly these are web pages in the HTML format, but, fortunately, PDFs are also treated as text files, being derivative of the PostScript page description language. This is not the case with, for instance, images, Excel files, DOCX files. 
Consequently, such files cannot be amassed using the methods described in the aforementioned papers.¶ 3.2 PDF links extraction¶ We experimented with two methods for extracting links to PDF files (step 1 in Figure 1):¶ 1. using CDX files, i.e., index server files provided by Common Crawl;¶ 2. looking for links to PDF files in WARC, i.e., raw crawl data files.¶ The first method is simpler, as CDX files are easy to download and take up only 225 GB in total. The second method might yield more links to PDF files, but:¶ – it is impossible for us to download all WARCs. Only a limited number of them can be processed, though still a significant number of PDF links can be added even if a small percentage of all WARC files are processed,¶ – there is lower probability that the file linked is available at all, be it in the crawl dump or simply at the original address.¶ In CDX files, the MIME type of a captured file is specified, and we limited ourselves to the application/pdf type.¶ Hence, in this paper, we focus on the first method, which allows to speed up the whole processing pipeline.","CC-MAIN-2022-21 (CDX)","","","" "Sadia Nourin, Van Tran, Xi Jiang, Kevin Bock, Nick Feamster, Nguyen Phong Hoang, Dave Levin – University of Maryland, USA; University of Chicago, USA","Measuring and Evading Turkmenistan’s Internet Censorship: A Case Study in Large-Scale Measurements of a Low-Penetration Country","https://doi.org/10.1145/3543507.3583189","papers","20230101Z00:00:00","Censorship Measurement, Web Filtering, Turkmenistan","Since 2006, Turkmenistan has been listed as one of the few Internet enemies by Reporters without Borders due to its extensively censored Internet and strictly regulated information control policies. Existing reports of filtering in Turkmenistan rely on a handful of vantage points or test a small number of websites. Yet, the country’s poor Internet adoption rates and small population can make more comprehensive measurement challenging. With a population of only six million people and an Internet penetration rate of only 38%, it is challenging to either recruit in-country volunteers or obtain vantage points to conduct remote network measurements at scale. We present the largest measurement study to date of Turkmenistan’s Web censorship. To do so, we developed TMC, which tests the blocking status of millions of domains across the three foundational protocols of the Web (DNS, HTTP, and HTTPS). Importantly, TMC does not require access to vantage points in the country. We apply TMC to 15.5M domains, our results reveal that Turkmenistan censors more than 122K domains, using different blocklists for each protocol. We also reverse-engineer these censored domains, identifying 6K over-blocking rules causing incidental filtering of more than 5.4M domains. Finally, we use , an open-source censorship evasion tool, to discover five new censorship evasion strategies that can defeat Turkmenistan’s censorship at both transport and application layers. We will publicly release both the data collected by TMC and the code for censorship evasion.","University of Maryland, USA; University of Chicago, USA","web-filtering, internet-censorship","[...] the payload of our probes contains domains curated from the Citizen Lab lists [5], the full Tranco list [42], and Common Crawl Project [8]. Due to limited resources of our VPS, we opt to probe the frst 10M FQDNs ranked by the Common Crawl Project instead of the full list of almost 400M FQDNs. [...] 
We scanned all regular expressions that TMC discovered against all FQDNs that we could obtain from DNS zone fles provided via ICANN’s Centralized Zone Data Service [ 6] and the full host list from the Common Crawl Project [8], totaling 718M FQDNs.","hyperlinkgraph","","","" @@ -355,7 +398,7 @@ "Stacey Taylor, Vlado Keselj – Dalhousie University","Don’t Worry Accountants, ChatGPT Won’t Be Taking Your Job... Yet","https://web.cs.dal.ca/~vlado/papers/cai23s.pdf","papers","20230101Z00:00:00","","ChatGPT has demonstrated the ability to generate plausible human-like text and research is underway to evaluate and benchmark its current performance in various do- mains. The research we present here provides a preliminary benchmark on ChatGPT’s ability to emulate the style and information presented in financial statement note disclo- sures. Using text from Canada’s major banks (n = 5) over the period of 2019–2021, we query ChatGPT to generate two required note disclosures and compare its text against the note disclosures written by the banks in their corporate annual reports. We find that the similarity between ChatGPT’s text and the human-authored text is very low, but also find that ChatGPT’s text is significantly more readable for one of the two disclosures (p < 0.05).","Dalhousie University","ChatGPT, Machine Learning, Financial Statements, Similarity, Stylometry, Readability","Finally, ChatGPT was trained on the common crawl web corpora which consists of 12 years of common crawl data [30 [T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. “Language models are few-shot learners”. In: Advances in neural information processing systems 33 (2020), pp. 1877–1901.]]. That means that for each of the 5 banks, there are only 12 annual reports that ChatGPT has seen. This could have a material effect on the outcome of its generation.","","","","" "Kyle Steinfeld – University of California, Berkeley, CA, USA","Clever little tricks: A socio-technical history of text-to-image generative models","https://doi.org/10.1177/14780771231168230","papers","20230101Z00:00:00","","The emergence of text-to-image generative models (e.g., Midjourney, DALL-E 2, Stable Diffusion) in the summer of 2022 impacted architectural visual culture suddenly, severely, and seemingly out of nowhere. To contextualize this phenomenon, this text offers a socio-technical history of text-to-image generative systems. Three moments in time, or “scenes,” are presented here: the first at the advent of AI in the middle of the last century; the second at the “reawakening” of a specific approach to machine learning at the turn of this century; the third that documents a rapid sequence of innovations, dubbed “clever little tricks,” that occurred across just 18 months. This final scene is the crux, and represents the first formal documentation of the recent history of a specific set of informal innovations. These innovations were produced by non-affiliated researchers and communities of creative contributors, and directly led to the technologies that so compellingly captured the architectural imagination in the summer of 2022. 
Across these scenes, we examine the technologies, application domains, infrastructures, social contexts, and practices that drive technical research and shape creative practice in this space.","University of California, Berkeley, CA, USA","ai/text-to-image-models, ai/generative-models, architecture, architectural visual culture","The LAION-400 dataset consists of 400 million image-caption pairs extracted from random selections of web pages from a web scrape that captured sites between 2014 and 2021 that was conducted by Common Crawl (a separate non- profit established in 2011 “with the goal of democratizing access to web information by producing and maintaining an open repository of web crawl data”).⁷⁵ [⁷⁵ Gil E. About common crawl. 2011. https://commoncrawl.org/about/ (accessed 04 December 2022).] Although it specifically was “not meant for any real-world production or application,”⁷⁶ [⁷⁶ Schuhmann C. LAION-400-Million open dataset. December 12, 2022. https://laion.ai/blog/laion-400-open-dataset (accessed 04 December 2022).] this dataset was used by Google to train its text-to-image generative model “Imagen” in 2022.⁷⁷ [⁷⁷ Saharia C, Chan W, Saxena S, et al. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. Epub ahead of print May 2022. DOI: 10.48550/arXiv.2205.11487]","","","","" "Zahra Moti, Asuman Senol, Hamid Bostani, Frederik Zuiderveen Borgesius, Veelasha Moonsamy, Arunesh Mathur, Gunes Acar – Radboud University, Netherlands; imec-COSIC, KU Leuven, Belgium; Ruhr University Bochum, Germany","Targeted and Troublesome: Tracking and Advertising on Children's Websites","https://arxiv.org/pdf/2308.04887.pdf","papers","20230101Z00:00:00","","On the modern web, trackers and advertisers frequently construct and monetize users' detailed behavioral profiles without consent. Despite various studies on web tracking mechanisms and advertisements, there has been no rigorous study focusing on websites targeted at children. To address this gap, we present a measurement of tracking and (targeted) advertising on websites directed at children. Motivated by lacking a comprehensive list of child-directed (i.e., targeted at children) websites, we first build a multilingual classifier based on web page titles and descriptions. Applying this classifier to over two million pages, we compile a list of two thousand child-directed websites. Crawling these sites from five vantage points, we measure the prevalence of trackers, fingerprinting scripts, and advertisements. Our crawler detects ads displayed on child-directed websites and determines if ad targeting is enabled by scraping ad disclosure pages whenever available. Our results show that around 90% of child-directed websites embed one or more trackers, and about 27% contain targeted advertisements--a practice that should require verifiable parental consent. Next, we identify improper ads on child-directed websites by developing an ML pipeline that processes both images and text extracted from ads. The pipeline allows us to run semantic similarity queries for arbitrary search terms, revealing ads that promote services related to dating, weight loss, and mental health; as well as ads for sex toys and flirting chat services. Some of these ads feature repulsive and sexually explicit imagery. In summary, our findings indicate a trend of non-compliance with privacy regulations and troubling ad safety practices among many advertisers and child-directed websites. 
To protect children and create a safer online environment, regulators and stakeholders must adopt and enforce more stringent measures.","Radboud University, Netherlands; imec-COSIC, KU Leuven, Belgium; Ruhr University Bochum, Germany","web-science/tracking, web-science/advertisements, computer-security/internet-security","Applying the classifier to the Common Crawl dataset [32], we compiled a list of 2K manually verified child-directed websites. [...] Our preliminary analysis of over 500K web pages from the most popular one million websites in the Common Crawl dataset [32] showed that more than 97% of the websites have a title, 63% of the websites include a description, and 24% contain a keywords meta tag. [...] Applying this method to the WAT metadata files from the June-July 2022 Common Crawl snapshot [32], we extracted the titles and descriptions, limiting ourselves to the top million websites in the Tranco [26] or the CrUX [82] list. [...] [32] “June/July 2022 crawl archive now available – Common Crawl,” https://commoncrawl.org/2022/07/june-july-2022-crawl-archive-now-available, 2023, [Online; accessed 28. Feb. 2023].","","","","" -"Juhani Luotolahti, Jenna Kanerva, Jouni Luoma, Valtteri Skantsi, Sampo Pyysalo, Filip Ginter Veronika Laippala – University of Turku, Finland; University of Oulu, Finland","Finnish Internet Parsebank","https://www.researchsquare.com/article/rs-3138153/v1","papers","20230101Z00:00:00","","We present a Finnish web corpus with multiple text sources and rich additional annotations. The corpus is based in large parts on a dedicated Internet crawl, supplementing data from the Common Crawl initiative and the Finnish Wikipedia. The size of the corpus is 6.2 billion tokens from 9.5 million source documents. The text is enriched with morphological analyses, word lemmas, dependency trees, named entities and text register (genre) identification. Paragraph-level scores of an n-gram language model, as well as paragraph duplication rate in each document are provided, allowing for further filtering of the dataset by the end user. Thanks to changes in the 2023 Finnish copyright legislation, the corpus is openly available for research purposes, and can also be accessed through the NoSketchEngine concordance tool and the dep search dependency tree query tool, all at https://turkunlp.org/finnish nlp.html.","University of Turku, Finland; University of Oulu, Finland","nlp/corpus-construction, language-specific corpus, web-as-corpus, nlp/dependency-tree-bank, Finnish","3.1 Data sources ¶ Our corpus is based on three primary data sources: Finnish Wikipedia, Common Crawl, and a custom web-crawl. [...] The Common Crawl dataset includes both plain text and raw HTML files, at the time without language metadata. We employed a language detection step using CLD3 as the language detector and MapReduce to download only the Finnish-language plaintext from the Amazon cloud service that hosts Common Crawl. As shown in Table2, this resulted in only a moderate amount of new data (3.2GB deduplicated text) ontop of Wikipedia (1.5GB deduplicated text). ¶ Consequently, we conducted a dedicated web crawl using the SpiderLing webcrawler (Suchomel & Pomikálek,2012). This web crawler is specifically designed forcollecting monolingual plaintext web corpora. It comprises a web crawling engine, atrigram-based language detector, and a boilerplate remover called Justext, which isresponsible for extracting plain text. Moreover, the crawler is lightweight and easyto run. 
The crawl was seeded with the list of all domain names in the.fi top-level domain, as well as the URLs of all Finnish text pages we gathered from CommonCrawl in the previous step. The crawl was carried out between 2014 and 2016. ¶ The final sizes of text obtained from the three sources are summarized in Table2, which shows that the dedicated webcrawl constitutes by far the largest portion of the final corpus. Note that in the newer versions of Common Crawl, a considerably stronger emphasis is placed on multilingual coverage, and the benefit of a dedicated webcrawl might be smaller but very unlikely to vanish entirely.","","","","" +"Juhani Luotolahti, Jenna Kanerva, Jouni Luoma, Valtteri Skantsi, Sampo Pyysalo, Veronika Laippala, Filip Ginter – University of Turku, Finland; University of Oulu, Finland","Finnish Internet Parsebank","https://www.researchsquare.com/article/rs-3138153/v1","papers","20230101Z00:00:00","","We present a Finnish web corpus with multiple text sources and rich additional annotations. The corpus is based in large parts on a dedicated Internet crawl, supplementing data from the Common Crawl initiative and the Finnish Wikipedia. The size of the corpus is 6.2 billion tokens from 9.5 million source documents. The text is enriched with morphological analyses, word lemmas, dependency trees, named entities and text register (genre) identification. Paragraph-level scores of an n-gram language model, as well as paragraph duplication rate in each document are provided, allowing for further filtering of the dataset by the end user. Thanks to changes in the 2023 Finnish copyright legislation, the corpus is openly available for research purposes, and can also be accessed through the NoSketchEngine concordance tool and the dep search dependency tree query tool, all at https://turkunlp.org/finnish_nlp.html.","University of Turku, Finland; University of Oulu, Finland","nlp/corpus-construction, language-specific corpus, web-as-corpus, nlp/dependency-tree-bank, Finnish","3.1 Data sources ¶ Our corpus is based on three primary data sources: Finnish Wikipedia, Common Crawl, and a custom web-crawl. [...] The Common Crawl dataset includes both plain text and raw HTML files, at the time without language metadata. We employed a language detection step using CLD3 as the language detector and MapReduce to download only the Finnish-language plaintext from the Amazon cloud service that hosts Common Crawl. As shown in Table 2, this resulted in only a moderate amount of new data (3.2GB deduplicated text) on top of Wikipedia (1.5GB deduplicated text). ¶ Consequently, we conducted a dedicated web crawl using the SpiderLing webcrawler (Suchomel & Pomikálek, 2012). This web crawler is specifically designed for collecting monolingual plaintext web corpora. It comprises a web crawling engine, a trigram-based language detector, and a boilerplate remover called Justext, which is responsible for extracting plain text. Moreover, the crawler is lightweight and easy to run. The crawl was seeded with the list of all domain names in the .fi top-level domain, as well as the URLs of all Finnish text pages we gathered from CommonCrawl in the previous step. The crawl was carried out between 2014 and 2016. ¶ The final sizes of text obtained from the three sources are summarized in Table 2, which shows that the dedicated webcrawl constitutes by far the largest portion of the final corpus. 
Note that in the newer versions of Common Crawl, a considerably stronger emphasis is placed on multilingual coverage, and the benefit of a dedicated webcrawl might be smaller but very unlikely to vanish entirely.","","","","" "R. Tenis – ","Modelling an Efficient URL Phishing Detection Approach Based on a Dense Network Model. Computer Systems Science & Engineering . 2023, Vol. 47 Issue 2, p2625-2641. 17p.","https://web.s.ebscohost.com/abstract?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=02676192&AN=169779920&h=WGjAKpK7ACB1ZcUfp8Ikhm9IcDPjsbjptgyhA5ityW47Z2oYK4JmZTEMhj6t1UhLOFgbraBWyMgS1NID6mz%2bcA%3d%3d&crl=c&resultNs=AdminWebAuth&resultLocal=ErrCrlNotAuth&crlhashurl=login.aspx%3fdirect%3dtrue%26profile%3dehost%26scope%3dsite%26authtype%3dcrawler%26jrnl%3d02676192%26AN%3d169779920","papers","20230101Z00:00:00","","The social engineering cyber-attack is where culprits mislead the users by getting the login details which provides the information to the evil server called phishing. The deep learning approaches and the machine learning are compared in the proposed system for presenting the methodology that can detect phishing websites via Uniform Resource Locator (URLs) analysis. The legal class is composed of the home pages with no inclusion of login forms in most of the present modern solutions, which deals with the detection of phishing. Contrarily, the URLs in both classes from the login page due, considering the representation of a real case scenario and the demonstration for obtaining the rate of false-positive with the existing approaches during the legal login pages provides the test having URLs. In addition, some model reduces the accuracy rather than training the base model and testing the latest URLs. In addition, a feature analysis is performed on the present phishing domains to identify various approaches to using the phishers in the campaign. A new dataset called the MUPD dataset is used for evaluation. Lastly, a prediction model, the Dense forward-backwards Long Short Term Memory (LSTM) model (d - FBLSTM), is presented for combining the forward and backward propagation of LSMT to obtain the accuracy of 98.5% on the initiated login URL dataset.","","computer-security/internet-security, web-security","The PhishTank provides the URLs for phishing to be gathered, and the Common Crawl provides the legal URLs.","","","","" "Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari Morcos – FAIR, Meta AI","D4: Improving LLM Pretraining via Document De-Duplication and Diversification","https://dmlr.ai/assets/accepted-papers/131/CameraReady/LLM_Data_Pruning_Paper_Camera_Ready.pdf","papers","20230101Z00:00:00","","","FAIR, Meta AI","nlp/large-language-models, nlp/corpus-construction, deduplication","We perform all of our training runs on a version of CommonCrawl pre-processed with a CCNet (Wenzek et al., 2019) pipeline identical to the one used by Touvron et al. (2023). We add an additional step of MinHash-based de-duplication (see more details in Section A.1). Applying this common step before our experiments guarantees that any effects observed in our experiments complement the currently prevalent approach of MinHash-based data de-duplication strategies. Throughout the rest of this work, we refer to this dataset as CC-dedup. [...] A.1.2. DATASET CURATION DETAILS In this subsection, we describe how we curate CC-dedup, the starting source dataset used throughout the paper. 
We start with 5 CommonCrawl dumps³ [³https://commoncrawl.org/the-data/get-started/] which range from 2017 to 2020. We then use CC-net (Wenzek et al., 2019), to de-duplicate data at the paragraph level, remove non-English web pages, and filter out low-quality pages. The pipeline we use is identical to the pipeline used in Touvron et al. (2023) (see the section after the subtitle ”English CommonCrawl [67%]”, within Section 2). On top of this, we add an additional step of MinHash (Broder, 1997) de-duplication at the document-level. The parameters for MinHash are 20 hashes per signature, 20 buckets, and 1 row per bucket. These parameters are the default parameters in the spark implementation of MinHashLSH, and we did not do a hyperparameter sweep on these parameters due to compute limitations. Previous work has attempted running MinHash with much more aggressive parameters: Lee et al. (2021) and Penedo et al. use 20 buckets, 450 hashes per bucket, and 9000 signatures per hash. We conjecture that more aggressive MinHash would remove more templates, resulting in a higher-quality starting dataset, potentially making the SemDeDup step of D4 less necessary. Abbas et al. (2023) did find that the performance of MinHash from Lee et al. (2021) and SemDeDup are comparable at a fixed data selection ratio of 3.9% on C4, indicating that SemDeDup filters out similar data to aggressive MinHash does. We leave sweeping over these hyperparameters as future work. We note that since our dataset is curated from CommonCrawl dumps, there is risk that our training set contains offensive or PII content. We note, however, that this risk is no more than that of standard language modeling curation such as Touvron et al. (2023), since we use the same pipeline to filter CommonCrawl dumps.","","","","" "Liang Wang, Hyojoon Kim, Prateek Mittal, Jennifer Rexford – Princeton University, USA","RAVEN: Stateless Rapid IP Address Variation for Enterprise Networks.","https://petsymposium.org/2023/files/papers/issue3/popets-2023-0077.pdf","papers","20230101Z00:00:00","privacy, traffic analysis, programmable data plane, P4, QUIC","Enterprise networks face increasing threats against the privacy of their clients. Existing enterprise services like Network Address Translation (NAT) offer limited privacy protection, at the cost of requiring per-flow state. In this paper, we introduce RAVEN (Rapid Address Variation for Enterprise Networks), a network-based privacy solution that is complementary to application-layer defenses. RAVEN protects privacy by frequently changing the client’s public IP address. With RAVEN, a client is not limited to using a single IP address at a given time, or even for a given connection. RAVEN goes further, breaking the association between packets that belong to the same connection by frequently changing the client’s IP address within a single connection. RAVEN achieves this through a novel division of labor: the client uses a transport protocol, like QUIC, that supports seamless connection migration, and decides when to switch its IP address, while the enterprise network actually changes the client’s IP address in a stateless manner at line rate and ensures end-to-end packet delivery. We implement RAVEN using QUIC and off-the-shelf programmable switches. We deploy RAVEN in a test IPv6 network and evaluate its defense against webpage fingerprinting attacks. Even with a strong adversary, the average precision of the best adaptive attacks drops from 0.96 to 0.84, with a 0.5% degradation in client throughput. 
When RAVEN changes IP addresses at unpredictable frequency, the precision of the best attacks falls to 0.78—the same effectiveness as WTF-PAD.","Princeton University, USA","computer-security/internet-security, privacy, internet traffic analysis","Webpages to fingerprint. To find webpages on GitHub Pages, we search the Common Crawl database [59] (Jan 2022 release) to extract URLs whose domain names end with “*.github.io”. From about 0.8 M URLs, we sampled 100 URLs as monitored webpages and 10,000 URLs as unmonitored. [...] [⁵⁹] The Common Crawl team. 2022. The Common Crawl Dataset. https://commoncrawl.org/.","CC-MAIN-2022-05","","","" @@ -364,7 +407,7 @@ "Hynek Kydlíček – Univerzita Karlova, Czech Republic","Implicit information extraction from news stories","http://hdl.handle.net/20.500.11956/183054","papers","20230101Z00:00:00","","","Univerzita Karlova, Czech Republic","nlp/corpus-construction, nlp/text-classification, ir/information-extraction, news-classification","We used Common Crawl² [²https://commoncrawl.org/] as a data source, as crawling live websites would be infeasible. For extraction, we developed a custom tool C’monCrawl³ [³https://github.com/hynky1999/CmonCrawl], which allows end-to-end extraction of Common Crawl data. We then deployed it in a distributed setting on the Artificial Intelligence Cluster (AIC)⁴ [⁴https://aic.ufal.mff.cuni.cz/], processed 49.2M URLs and extracted 3.2M articles.","","","","" "Matyas Bohacek, Michal Bravansky, Filip Trhlík, Václav Moravec – Faculty of Social Sciences, Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom","Czech-ing the News: Article Trustworthiness Dataset for Czech","https://aclanthology.org/2023.wassa-1.10/","papers","20230101Z00:00:00","","We present the Verifee dataset: a multimodal dataset of news articles with fine-grained trustworthiness annotations. We bring a diverse set of researchers from social, media, and computer sciences aboard to study this interdisciplinary problem holistically and develop a detailed methodology that assesses the texts through the lens of editorial transparency, journalist conventions, and objective reporting while penalizing manipulative techniques. We collect over 10,000 annotated articles from 60 Czech online news sources. Each item is categorized into one of the 4 proposed classes on the credibility spectrum – ranging from entirely trustworthy articles to deceptive ones – and annotated of its manipulative attributes. We fine-tune prominent sequence-to-sequence language models for the trustworthiness classification task on our dataset and report the best F-1 score of 0.53. We open-source the dataset, annotation methodology, and annotators’ instructions in full length at https://www.verifee.ai/research/ to enable easy build-up work.","Faculty of Social Sciences, Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom","nlp/corpus-construction, nlp/fake-news-detection, news-classification","Initially, we assembled nearly 94,000 articles by scraping URLs of 60 Czech news sources² obtained from Common Crawl³. These sources included mainstream journalistic websites, tabloids, independent news outlets, and websites that are part of the disinformation ecosystem (Štětka et al., 2021), capturing the full scope of journalistic content in the Czech Republic. [...] 
We applied multiple filters and balancing mechanisms based on text length and topics to mitigate deficiencies caused by inherent flaws in Common Crawl, which reduced the dataset’s size from 94,000 to 10,197 items. This way, we also ensured that the data is as representative of the Czech news ecosystem and as diverse as possible.","","","","" "Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jingyuan Wang, Jian-Yun Nie, Ji-Rong Wen – Gaoling School of Artificial Intelligence, Renmin University of China, China; School of Information, Renmin University of China, China; DIRO, Université de Montréal, Canada; School of Computer Science and Engineering, Beihang University, China; Beijing Key Laboratory of Big Data Management and Analysis Methods, China","The Web Can Be Your Oyster for Improving Language Models","https://aclanthology.org/2023.findings-acl.46.pdf","papers","20230101Z00:00:00","","Pretrained language models (PLMs) encode a large amount of world knowledge. However, as such knowledge is frozen at the time of model training, the models become static and limited by the training data at that time. In order to further improve the capacity of PLMs for knowledge-intensive tasks, we consider augmenting PLMs with the large-scale web using search engine. Unlike previous augmentation sources (e.g., Wikipedia data dump), the web provides broader, more comprehensive and constantly updated information. In this paper, we present a web-augmented PLM – UniWeb, which is trained over 16 knowledge-intensive tasks in a unified text-to-text format. Instead of simply using the retrieved contents from web, our approach has made two major improvements. Firstly, we propose an adaptive search engine assisted learning method that can self-evaluate the confidence level of PLM’s predictions, and adaptively determine when to refer to the web for more data, which can avoid useless or noisy augmentation from web. Secondly, we design a pretraining task, i.e., continual knowledge learning, based on salient spans prediction, to reduce the discrepancy between the encoded and retrieved knowledge. Experiments on a wide range of knowledge-intensive tasks show that our model significantly outperforms previous retrieval-augmented methods.","Gaoling School of Artificial Intelligence, Renmin University of China, China; School of Information, Renmin University of China, China; DIRO, Université de Montréal, Canada; School of Computer Science and Engineering, Beihang University, China; Beijing Key Laboratory of Big Data Management and Analysis Methods, China","nlp/large-language-models","[...] we select the CCNet snapshot corresponding to the August 2019 Common Crawl snapshot which covers a wide range of 134M web documents and finally yields 906M passages of 100 tokens. CCNet processes Common Crawl through deduplication, language identification and quality filtering based on perplexity calculated by a language model.","","","CCNet","" -"Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, Thien Huu Nguyen – University of Oregon, USA; Adobe Research, USA","CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages","https://arxiv.org/pdf/2309.09400.pdf","papers","20230101Z00:00:00","","The driving factors behind the development of large language models (LLMs) with impressive learning capabilities are their colossal model sizes and extensive training datasets. 
Along with the progress in natural language processing, LLMs have been frequently made accessible to the public to foster deeper investigation and applications. However, when it comes to training datasets for these LLMs, especially the recent state-of-the-art models, they are often not fully disclosed. Creating training data for high-performing LLMs involves extensive cleaning and deduplication to ensure the necessary level of quality. The lack of transparency for training data has thus hampered research on attributing and addressing hallucination and bias issues in LLMs, hindering replication efforts and further advancements in the community. These challenges become even more pronounced in multilingual learning scenarios, where the available multilingual text datasets are often inadequately collected and cleaned. Consequently, there is a lack of open-source and readily usable dataset to effectively train LLMs in multiple languages. To overcome this issue, we present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for LLM development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs: [https://huggingface.co/datasets/uonlp/CulturaX]","University of Oregon, USA; Adobe Research, USA","nlp/corpus-construction, dataset-creation, nlp/large-language-models","","","","Tensorflow-C4-Multilingual, OSCAR","" +"Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, Thien Huu Nguyen – University of Oregon, USA; Adobe Research, USA","CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages","https://arxiv.org/pdf/2309.09400.pdf","papers","20230101Z00:00:00","","The driving factors behind the development of large language models (LLMs) with impressive learning capabilities are their colossal model sizes and extensive training datasets. Along with the progress in natural language processing, LLMs have been frequently made accessible to the public to foster deeper investigation and applications. However, when it comes to training datasets for these LLMs, especially the recent state-of-the-art models, they are often not fully disclosed. Creating training data for high-performing LLMs involves extensive cleaning and deduplication to ensure the necessary level of quality. The lack of transparency for training data has thus hampered research on attributing and addressing hallucination and bias issues in LLMs, hindering replication efforts and further advancements in the community. These challenges become even more pronounced in multilingual learning scenarios, where the available multilingual text datasets are often inadequately collected and cleaned. Consequently, there is a lack of open-source and readily usable dataset to effectively train LLMs in multiple languages. To overcome this issue, we present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for LLM development. 
Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs: [https://huggingface.co/datasets/uonlp/CulturaX]","University of Oregon, USA; Adobe Research, USA","nlp/corpus-construction, dataset-creation, nlp/large-language-models","","","CulturaX","Tensorflow-C4-Multilingual, OSCAR","" "Sneha Kudugunta, Isaac Caswell, Biao Zhang, Xavier Garcia, Christopher A. Choquette-Choo, Katherine Lee, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat – Google DeepMind; Google Research","MADLAD-400: A Multilingual And Document-Level Large Audited Dataset","https://arxiv.org/pdf/2309.04662.pdf","papers","20230101Z00:00:00","","We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models 1 available to the research community.","Google DeepMind; Google Research","nlp/corpus-construction, dataset-creation, nlp/large-language-models","A common approach to creating such datasets is to mine language specific data from general web crawls such as CommonCrawl [57, 43, 68] to create datasets. We simply take this approach and scale it. We train a document-level LangID model on 498 languages to obtain CommonCrawl annotations at a document level and obtain a 5-trillion token, document-level monolingual dataset. [...] First, we collect as large a dataset of unlabeled web text as possible. More specifically, we use all available snapshots of CommonCrawl2 as of August 20, 2022. After some preliminary data cleaning, we use a highly multilingual LangID model to provide document-level annotations (Section 2.2). Finally, we conduct a self-audit (Section 2.4), or quality review, of this preliminary dataset partitioned by language, and design filters to remove noisy content. When appropriate, we correct language names and remove languages from the preliminary dataset. We note that building MADLAD-400 was an iterative process, and that while we describe one major quality review in depth, we conducted several stages of filtering.","","MADLAD-400","","" "Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, Jimmy Ba – University of Toronto, Canada; University of Cambridge, United Kingdom; Princeton University, USA","OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text","https://arxiv.org/abs/2310.06786","papers","20230101Z00:00:00","","There is growing evidence that pretraining on high quality, carefully thought-out tokens such as code or mathematics plays an important role in improving the reasoning abilities of large language models. 
For example, Minerva, a PaLM model finetuned on billions of tokens of mathematical documents from arXiv and the web, reported dramatically improved performance on problems that require quantitative reasoning. However, because all known open source web datasets employ preprocessing that does not faithfully preserve mathematical notation, the benefits of large scale training on quantitative web documents are unavailable to the research community. We introduce OpenWebMath, an open dataset inspired by these works containing 14.7B tokens of mathematical webpages from Common Crawl. We describe in detail our method for extracting text and LaTeX content and removing boilerplate from HTML documents, as well as our methods for quality filtering and deduplication. Additionally, we run small-scale experiments by training 1.4B parameter language models on OpenWebMath, showing that models trained on 14.7B tokens of our dataset surpass the performance of models trained on over 20x the amount of general language data. We hope that our dataset, openly released on the Hugging Face Hub, will help spur advances in the reasoning abilities of large language models.","University of Toronto, Canada; University of Cambridge, United Kingdom; Princeton University, USA","mathematics, mathematical text, nlp/corpus-construction, dataset-creation, nlp/large-language-models","We extract documents from Common Crawl¹ [¹ https://commoncrawl.org/], applying our pipeline to extract text while preserving mathematical content in the form of LaTeX equations. We then filter the documents, ensuring that only high-quality English mathematical documents are kept. Finally, we deduplicate the dataset, resulting in 14.7B tokens of high-quality mathematical content suitable for both pretraining and finetuning large language models.","","OpenWebMath","","" "Minh-Hoang Dang, Alban Gaignard, Hala Skaf-Molli, Pascal Molli – Nantes Université, France","Schema.org: How is it used?","https://hal.science/hal-04250523/document","papers","20230101Z00:00:00","","Schema.org defines a shared vocabulary for semantically annotating web pages. Due to the vast and diverse nature of the contributed annotations, it is not easy to understand the widespread use of Schema.org. In this poster, we rely on the characteristic sets computed from the web data commons datasets to provide insights into property combinations on various websites. Thanks to in-depth experiments, this poster establishes a comprehensive observatory for schema.org annotations, visually presenting the most frequently used classes, commonly used combinations of properties per class, the average number of filled properties per class, and the classes with the greatest property coverage. These findings are valuable for both the communities involved in defining Schema.org vocabularies and the users of these vocabularies.","Nantes Université, France","semantic web, linked data","The Web Data Commons [3, 4] project extracts semantic annotations from the Common Crawl annually since 2010². It provides a reference dataset to study the evolution and adoption of semantic annotations in web pages. The extracted data is represented with RDF quads, which consist of RDF statements along with the URL of the corresponding web page. The abundance of annotations on the web and the diversity of contributors raise challenges in understanding how Schema.org is used at the web-scale. [...] We used the JSON-LD (most common formats) dataset from the WebDataCommons [3] released in October 2021. 
This dataset is derived from crawling 35 million websites, of which 42% utilized Web Entities. It comprises 82 billion RDF quads (16 terabytes uncompressed) and 6.7 billion Schema.org entities.","","","WDC-triples","" @@ -372,5 +415,11 @@ "Gus Eggert, Kevin Huo, Mike Biven, Justin Waugh – Approximate Labs, Boulder, USA","TabLib: A Dataset of 627M Tables with Context","https://arxiv.org/pdf/2310.07875.pdf","papers","20230101Z00:00:00","","It is well-established that large, diverse datasets play a pivotal role in the performance of modern AI systems for text and image modalities. However, there are no datasets for tabular data of comparable size and diversity to those available for text and images. Thus we present {""}TabLib'', a compilation of 627 million tables totaling 69 TiB, along with 867B tokens of context. TabLib was extracted from numerous file formats, including CSV, HTML, SQLite, PDF, Excel, and others, sourced from GitHub and Common Crawl. The size and diversity of TabLib offer considerable promise in the table modality, reminiscent of the original promise of foundational datasets for text and images, such as The Pile and LAION.","Approximate Labs, Boulder, USA","dataset creation, web tables","We used the latest crawl at the time, which was CC-MAIN-2023-23. Common Crawl results are serialized using the WARC format, which includes “request” and “response” records. We only considered response records. We discarded “truncated” responses which had response lengths that exceed Common Crawl’s limit. If a WARC-Identified-Payload- Type record header was included in the record, then we used its mimetype as a hint for detecting the content type, otherwise we used the Content-Type header in the HTTP response, and followed a similar approach as GitHub (use the mimetype if possible, otherwise use libmagic). About 20% of WARC files were dropped due to issues parsing certain HTML elements with Pandas.","","","","" "Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou – Alibaba Group","Data-Juicer: A One-Stop Data Processing System for Large Language Models","https://arxiv.org/pdf/2309.02033.pdf","papers","20230101Z00:00:00","","The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. 
Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.","Alibaba Group","dataset creation, nlp/corpus-construction, nlp/large-language-models","","","","","" "Wang Tongjing, Evert Meijers, Ziyu Bao, Huijuan Wang – Utrecht University, The Netherlands; Delft University of Technology, The Netherlands","Intercity networks and urban performance: a geographical text mining approach","https://www.tandfonline.com/doi/pdf/10.1080/12265934.2023.2253193","papers","20230101Z00:00:00","","Compared to the burgeoning literature discussing the importance of agglomeration externalities for development, limited attention has been given to network externalities. This is largely due to limited data availability. We propose a general measure to proxy city network externalities based on toponym co-occurrences that indicate the relatedness between cities. This paper extracts intercity relationships based on the co-occurrence of Chinese place names on 2.5 billion webpages. We calculate and map absolute and relative network positions, which we use to explain urban labour productivity. We found that a stronger embeddedness in networks of cities is significantly and positively associated with urban productivity. Smaller cities benefit comparatively more from being well embedded in city networks, suggesting that these relations can compensate for a lack of agglomeration externalities. We also compare the importance for urban performance of city network externalities vis-à-vis agglomeration externalities. City network externalities turn out to be more important in explaining urban performance than agglomeration externalities. This calls for new theorizing on a relational approach to urban and regional development. Rather than stimulating further concentration of urbanization, our findings suggest that fostering relationships between cities is a viable alternative urban development strategy. We conclude with suggestions for a research agenda that delves deeper into city network externalities.","Utrecht University, The Netherlands; Delft University of Technology, The Netherlands","geography, web-mining, dataset creation","[...] like Meijers and Peris we prefer using corpora from the CommonCrawl Archive of webpages as our text corpus. We used the entire April 2019 database for processing and conducting experiments. The original database we extracted contains about 6.98 TB of uncompressed text containing 2.5 billion web pages crawled between 18 and 26 April 2019. We selected all pages using at least 10 Chinese characters. The filtered corpus contains approximately 110 billion Chinese words on 91 million pages from 1067 different domains. Over 91% of the tokens are from websites registered under the four top-level domains (TLD): .com (62.23%), .cn (14.80%), .net (7.86%) and .org (2.68%). 
The four TLDs make up about 87.57% of pages.","","","",""
-"Mikko Aulamo, Nikolay Bogoychev, Shaoxiong Ji, Graeme Nail, Gema Ram{\'\i}rez-S{\'a}nchez, J{\""o}rg Tiedemann, Jelmer Van Der Linde, Jaume Zaragoza – University of Helsinki, ; University of Edinburgh, United Kingdom; Prompsit Language Engineering","HPLT: High Performance Language Technologies","https://aclanthology.org/2023.eamt-1.61.pdf","papers","20230101Z00:00:00","","HPLT: High Performance Language Technologies Mikko Aulamo⋆, Nikolay Bogoychev‡, Shaoxiong Ji⋆, Graeme Nail‡, Gema Ram´ırez-S´anchez†, J¨org Tiedemann⋆, Jelmer van der Linde‡, Jaume Zaragoza† ⋆University of Helsinki, ‡University of Edinburgh, †Prompsit Language Engineering https://hplt-project.org/ Abstract We describe the High Performance Language Technologies project (HPLT), a 3-year EU-funded project started in September 2022. HPLT will build a space combining petabytes of natural language data with large-scale model training. It will derive monolingual and bilingual datasets from the Internet Archive and CommonCrawl and build efficient and solid machine translation (MT) as well as large language models (LLMs). HPLT aims at providing free, sustainable and reusable datasets, models and workflows at scale using high-performance computing (HPC).","University of Helsinki, ; University of Edinburgh, United Kingdom; Prompsit Language Engineering","nlp/corpus-construction, nlp/large-language-models","Datasets: Starting from 7 PB of web-crawled data from the Internet Archive3 and 5 from CommonCrawl,4 we will derive monolingual and bilin- gual datasets for systematic LLM and MT building with a large language coverage.","","","",""
+"Mikko Aulamo, Nikolay Bogoychev, Shaoxiong Ji, Graeme Nail, Gema Ramírez-Sánchez, Jörg Tiedemann, Jelmer van der Linde, Jaume Zaragoza – University of Helsinki, Finland; University of Edinburgh, United Kingdom; Prompsit Language Engineering","HPLT: High Performance Language Technologies","https://aclanthology.org/2023.eamt-1.61.pdf","papers","20230101Z00:00:00","","We describe the High Performance Language Technologies project (HPLT), a 3-year EU-funded project started in September 2022. HPLT will build a space combining petabytes of natural language data with large-scale model training. It will derive monolingual and bilingual datasets from the Internet Archive and CommonCrawl and build efficient and solid machine translation (MT) as well as large language models (LLMs). HPLT aims at providing free, sustainable and reusable datasets, models and workflows at scale using high-performance computing (HPC).","University of Helsinki, Finland; University of Edinburgh, United Kingdom; Prompsit Language Engineering","nlp/corpus-construction, nlp/large-language-models","Datasets: Starting from 7 PB of web-crawled data from the Internet Archive³ and 5 from CommonCrawl,⁴ we will derive monolingual and bilingual datasets for systematic LLM and MT building with a large language coverage.","","","",""
"Raffaele Sommese, Roland van Rijswijk-Deij, Mattijs Jonker – University of Twente, The Netherlands","This Is a Local Domain: On Amassing Country-Code Top-Level Domains from Public Data","https://arxiv.org/pdf/2309.01441.pdf","papers","20230101Z00:00:00","","Domain lists are a key ingredient for representative censuses of the Web. Unfortunately, such censuses typically lack a view on domains under country-code top-level domains (ccTLDs). This introduces unwanted bias: many countries have a rich local Web that remains hidden if their ccTLDs are not considered.
The reason ccTLDs are rarely considered is that gaining access -- if possible at all -- is often laborious. To tackle this, we ask: what can we learn about ccTLDs from public sources? We extract domain names under ccTLDs from 6 years of public data from Certificate Transparency logs and Common Crawl. We compare this against ground truth for 19 ccTLDs for which we have the full DNS zone. We find that public data covers 43%-80% of these ccTLDs, and that coverage grows over time. By also comparing port scan data we then show that these public sources reveal a significant part of the Web presence under a ccTLD. We conclude that in the absence of full access to ccTLDs, domain names learned from public sources can be a good proxy when performing Web censuses.","University of Twente, The Netherlands","dataset creation, internet domain names, ccTLDs, country-code top-level domains","Common Crawl – Common Crawl is a nonprofit organization that builds and maintains a sizable, open repository of Web crawl data, offering years and petabytes of Web page data. The Common Crawl data lives in Amazon S3 as part of Amazon’s Open Data Sponsorship Program and is free for anyone to access. Crawls are seeded from a set of candidate domain names and the crawler follows links leading to other pages. Crawls are performed approximately every one to two months and contain raw Web page data, metadata and text extractions, among others. Relevant to our work, crawls accumulate many tens of millions of registered domain names that one can extract from the so-called URL index. [...] For Common Crawl we consider data for crawl snapshots dated between June 2017 and June 2023 (inclusive). There are 58 such snapshots, collectively accounting for 127 million registered domain names. The combined total number of unique registered domain names in our consolidated dataset is 430 million.","","","","" +"Isaac Caswell, Lisa Wang, Isabel Papadimitriou – Google Research; Google DeepMind; Computer Science Department, Stanford University","Separating the Wheat from the Chaff with BREAD: An open-source benchmark and metrics to detect redundancy in text","https://arxiv.org/abs/2311.06440","papers","20230101Z00:00:00","","Data quality is a problem that perpetually resurfaces throughout the field of NLP, regardless of task, domain, or architecture, and remains especially severe for lower-resource languages. A typical and insidious issue, affecting both training data and model output, is data that is repetitive and dominated by linguistically uninteresting boilerplate, such as price catalogs or computer-generated log files. Though this problem permeates many web-scraped corpora, there has yet to be a benchmark to test against, or a systematic study to find simple metrics that generalize across languages and agree with human judgements of data quality. In the present work, we create and release BREAD, a human-labeled benchmark on repetitive boilerplate vs. plausible linguistic content, spanning 360 languages. We release several baseline CRED (Character REDundancy) scores along with it, and evaluate their effectiveness on BREAD. 
We hope that the community will use this resource to develop better filtering methods, and that our reference implementations of CRED scores can become standard corpus evaluation tools, driving the development of cleaner language modeling corpora, especially in low-resource languages.","Google Research; Google DeepMind; Computer Science Department, Stanford University","nlp/corpus-construction, data quality, nlp/boilerplate-removal, redundancy","BREAD consists of randomly-chosen documents from the multilingual, common-crawl-based MADLAD-400 dataset (Kudugunta et al., 2023), which are then annotated by expert NLP-practitioner annotators.","","","MADLAD-400",""
+"Sneha Kudugunta, Isaac Rayburn Caswell, Biao Zhang, Xavier Garcia, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat – Google DeepMind; Google Research","MADLAD-400: A Multilingual And Document-Level Large Audited Dataset","https://openreview.net/forum?id=Y45ZCxslFx","papers","20230101Z00:00:00","","We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.","Google DeepMind; Google Research","nlp/corpus-construction, nlp/multi-lingual-corpus","A common approach to creating such datasets is to mine language specific data from general web crawls such as CommonCrawl [52, 38, 63] to create datasets. We simply take this approach and scale it. We train a document-level LangID model on 498 languages to obtain CommonCrawl annotations at a document level and obtain a 5-trillion token, document-level monolingual dataset.¶ [...] First, we collect as large a dataset of unlabeled web text as possible. More specifically, we use all available snapshots of CommonCrawl³ as of August 20, 2022. After some preliminary data cleaning, we use a highly multilingual LangID model to provide document-level annotations (Section 2.2). Finally, we conduct a self-audit (Section 2.4), or quality review, of this preliminary dataset partitioned by language, and design filters to remove noisy content. When appropriate, we correct language names and remove languages from the preliminary dataset. We note that building MADLAD-400 was an iterative process, and that while we describe one major quality review in depth, we conducted several stages of filtering.","","MADLAD-400","",""
+"Xian Gong, Paul X. McCarthy, Marian-Andrei Rizoiu, Paolo Boldi – University of Technology, Australia; University of New South Wales, Australia; Università degli Studi di Milano, Italy","Harmony in the Australian Domain Space","https://doi.org/10.1145/3614419.3643998","papers","20240101Z00:00:00","","In this paper we use for the first time a systematic approach in the study of harmonic centrality at a Web domain level, and gather a number of significant new findings about the Australian web.
In particular, we explore the relationship between economic diversity at the firm level and the structure of the Web within the Australian domain space, using harmonic centrality as the main structural feature. The distribution of harmonic centrality values is analyzed over time, and we find that the distributions exhibit a consistent pattern across the different years. The observed distribution is well captured by a partition of the domain space into six clusters; the temporal movement of domain names across these six positions yields insights into the Australian Domain Space and exhibits correlations with other non-structural characteristics. From a more global perspective, we find a significant correlation between the median harmonic centrality of all domains in each OECD country and one measure of global trust, the WJP Rule of Law Index. Further investigation demonstrates that 35 countries in OECD share similar harmonic centrality distributions. The observed homogeneity in distribution presents a compelling avenue for exploration, potentially unveiling critical corporate, regional, or national insights.","University of Technology, Australia; University of New South Wales, Australia; Università degli Studi di Milano, Italy","","There are many public collections of web crawls, but one that is known for being very reliable and quite wide in scope is the Common Crawl¹. Common Crawl’s measurements are preferred for web and network analysis due to their extensive coverage, regular updates, and large-scale, publicly accessible datasets, which reduces the need for resource-intensive data collection and is applicable across various research in a reproducible way. [...]","","","",""
+"Peter Carragher, Evan M. Williams, Kathleen M. Carley – Carnegie Mellon University, USA","Misinformation Resilient Search Rankings with Webgraph-based Interventions","https://doi.org/10.1145/3670410","papers","20240101Z00:00:00","search engine optimization, misinformation, website reliability, pagerank","The proliferation of unreliable news domains on the internet has had wide-reaching negative impacts on society. We introduce and evaluate interventions aimed at reducing traffic to unreliable news domains from search engines while maintaining traffic to reliable domains. We build these interventions on the principles of fairness (penalize sites for what is in their control), generality (label/fact-check agnostic), targeted (increase the cost of adversarial behavior), and scalability (works at webscale). We refine our methods on small-scale webdata as a testbed and then generalize the interventions to a large-scale webgraph containing 93.9M domains and 1.6B edges. We demonstrate that our methods penalize unreliable domains far more than reliable domains in both settings and we explore multiple avenues to mitigate unintended effects on both the small-scale and large-scale webgraph experiments. These results indicate the potential of our approach to reduce the spread of misinformation and foster a more reliable online information ecosystem.
This research contributes to the development of targeted strategies to enhance the trustworthiness and quality of search engine results, ultimately benefiting users and the broader digital community.","Carnegie Mellon University, USA","web-science/hyperlinkgraph, misinformation, disinformation, domain-ranking","","","","",""
+"Tommaso Fontana, Sebastiano Vigna, Stefano Zacchiroli – Inria, DGDI, Paris, France; Università degli Studi di Milano, Dipartimento di Informatica, Milan, Italy; LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France","WebGraph: The Next Generation (Is in Rust)","https://doi.org/10.1145/3589335.3651581","papers","20240101Z00:00:00","","","Inria, DGDI, Paris, France; Università degli Studi di Milano, Dipartimento di Informatica, Milan, Italy; LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France","web-science/hyperlinkgraph; graph-processing; programming-languages/Java; programming-languages/Rust; cc-cited-not-used","Moreover, open data projects such as Common Crawl and Software Heritage (SWH) [5] have used WebGraph to compress and distribute their data.","","","",""
+"Henry S. Thompson – The University of Edinburgh, Edinburgh, United Kingdom","Improved methodology for longitudinal Web analytics using Common Crawl","https://www.research.ed.ac.uk/en/publications/improved-methodology-for-longitudinal-web-analytics-using-common-","papers","20240101Z00:00:00","","Common Crawl is a multi-petabyte longitudinal dataset containing over 100 billion web pages which is widely used as a source of language data for sequence model training and in web science research. Each of its constituent archives is on the order of 75TB in size. Using it for research, particularly longitudinal studies, which necessarily involve multiple archives, is therefore very expensive in terms of compute time and storage space and/or web bandwidth. Two new methods for mitigating this problem are presented here, based on exploiting and extending the much smaller (<200 gigabytes (GB) compressed) index which is available for each archive. By adding Last-Modified timestamps to the index we enable longitudinal exploration using only a single archive. By comparing the distribution of index features for each of the 100 segments into which an archive is divided with their distribution over the whole archive, we have identified the least and most representative segments for a number of recent archives. Using this allows the segment(s) that are most representative of an archive to be used as proxies for the whole. We illustrate this approach in an analysis of changes in URI length over time, leading to an unanticipated insight into how the creation of Web pages has changed over time.","The University of Edinburgh, Edinburgh, United Kingdom","web-archiving, web-dataset","","","","",""