Upload 14 files
- commoncrawl_citations_annotated_2010.csv +2 -0
- commoncrawl_citations_annotated_2012.csv +2 -0
- commoncrawl_citations_annotated_2013.csv +7 -0
- commoncrawl_citations_annotated_2014.csv +28 -0
- commoncrawl_citations_annotated_2015.csv +14 -0
- commoncrawl_citations_annotated_2016.csv +3 -0
- commoncrawl_citations_annotated_2017.csv +14 -0
- commoncrawl_citations_annotated_2018.csv +0 -0
- commoncrawl_citations_annotated_2019.csv +14 -0
- commoncrawl_citations_annotated_2020.csv +20 -0
- commoncrawl_citations_annotated_2021.csv +27 -0
- commoncrawl_citations_annotated_2022.csv +37 -0
- commoncrawl_citations_annotated_2023.csv +0 -0
- commoncrawl_citations_annotated_2024.csv +5 -0
commoncrawl_citations_annotated_2010.csv
ADDED
@@ -0,0 +1,2 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
Ahad Rana – Common Crawl,Common Crawl – Building an open web-scale crawl using Hadoop,https://www.slideshare.net/hadoopusergroup/common-crawlpresentation,papers,20100101Z00:00:00,,,Common Crawl,"web-crawling, big data, Hadoop",,,,,
commoncrawl_citations_annotated_2012.csv
ADDED
@@ -0,0 +1,2 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Hannes Mühleisen, Christian Bizer – Freie Universität, Berlin, Germany",Web Data Commons – Extracting Structured Data from Two Large Web Corpora,http://ceur-ws.org/Vol-937/ldow2012-inv-paper-2.pdf,papers,20120101Z00:00:00,,,"Freie Universität, Berlin, Germany",,,,,,
commoncrawl_citations_annotated_2013.csv
ADDED
@@ -0,0 +1,7 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Alexandra Birch, Nadir Durrani, Phillip Koehn – School of Informatics, University of Edinburgh",Edinburgh SLT and MT System Description for the IWSLT 2013,http://workshop2013.iwslt.org/downloads/Edinburgh_SLT_and_MT_System_Description_for_the_IWSLT_2013_Evaluation.pdf,papers,20130101Z00:00:00,,,"School of Informatics, University of Edinburgh",,,,,,
"Jason R. Smith, Herve Saint-Amand, Magdalena Plamada, Phillipp Koehn, Chris Callison-Burch, Adam Lopez – Johns Hopkins University, University of Edinburgh, University of Zurich, University of Pennsylvania",Dirt Cheap Web-Scale Parallel Text from the Common Crawl,http://www.cs.jhu.edu/~ccb/publications/bitexts-from-common-crawl.pdf,papers,20130101Z00:00:00,,,"Johns Hopkins University, University of Edinburgh, University of Zurich, University of Pennsylvania",,,,,,
"Sara Stymne, Christian Hardmeier, Jorg Tiedemann, Joakim Nivre – Uppsala University: Department of Linguistics and Philology",Tunable Distortion Limits and Corpus Cleaning for SMT,http://statmt.org/wmt13/pdf/WMT29.pdf,papers,20130101Z00:00:00,,,Uppsala University: Department of Linguistics and Philology,,,,,,
"Thanh-Le Ha, Teresa Herrmann, Jan Niehues, Mohammed Mediani, Eunah Cho, Yuqi Zhang, Isabel Slawik, Alex Waibel – Institute for Anthropomatics",The KIT Translation Systems for IWSLT 2013,http://workshop2013.iwslt.org/downloads/The_KIT_Translation_Systems_for_IWSLT_2013.pdf,papers,20130101Z00:00:00,,,Institute for Anthropomatics,,,,,,
"Wanno Drijfhout, Oliver Jundt, Lesley Wevers, Djoerd Hiemstra – University of Twente",Traitor: Associating Concepts using the World Wide Web,http://doc.utwente.nl/88328/,papers,20130101Z00:00:00,,,University of Twente,,,,,,
"Christian Bizer, Kai Eckert, Robert Meusel, Hannes Mühleisen, Michael Schuhmacher, Johanna Völker – Data and Web Science Group – University of Mannhein, Database Architectures Group, Centrum Wiskunde & Informatica, Netherlands","Deployment of RDFa, Microdata, and Microformats on the Web – A Quantitative Analysis",http://hannes.muehleisen.org/Bizer-etal-DeploymentRDFaMicrodataMicroformats-ISWC-InUse-2013.pdf,papers,20130101Z00:00:00,,,"Data and Web Science Group – University of Mannhein, Database Architectures Group, Centrum Wiskunde & Informatica, Netherlands",,,,,,
commoncrawl_citations_annotated_2014.csv
ADDED
@@ -0,0 +1,28 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Jeffrey Pennington, Richard Socher, Christopher D. Manning – Stanford University, California, USA",GloVe: Global vectors for word representation,https://aclanthology.org/D14-1162.pdf,papers,20140101Z00:00:00,,,"Stanford University, California, USA",nlp/word-embeddings,"We trained our model on five corpora of varying sizes: [...] and on 42 billion tokens of web data, from Common Crawl⁵ [⁵ To demonstrate the scalability of the model, we also trained it on a much larger sixth corpus, containing 840 billion tokens of web data, but in this case we did not lowercase the vocabulary, so the results are not directly comparable.].",,,,
"Mohammed Mediani, Joshua Winebarger, Alexander Waibel – Karlsruhe Institute of Technology, Germany",Improving In-Domain Data Selection For Small In-Domain Sets,http://www.statmt.org/OSMOSES/IWSLT-36.pdf,papers,20140101Z00:00:00,,,"Karlsruhe Institute of Technology, Germany",,,,,,
"Junfei Guo, Juan Liu, Qi Han, Andreas Maletti – School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany",A Tunable Language Model for Statistical Machine Translation,http://www.ims.uni-stuttgart.de/institut/mitarbeiter/maletti/pub/guoliuhanmal14.pdf,papers,20140101Z00:00:00,,,"School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany",,,,,,
"Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Y. Ng – Baidu Research – Silicon Valley AI Lab",Deep Speech: Scaling up end-to-end speech recognition,http://arxiv.org/pdf/1412.5567v2.pdf,papers,20140101Z00:00:00,,,Baidu Research – Silicon Valley AI Lab,,,,,,
"Eva Hasler, Philipp Koehn, Barry Haddow, Phil Blunsom – University of Edinburgh; University of Oxford",Dynamic Topic Adaptation for Phrase-based MT,http://www.aclweb.org/anthology/E/E14/E14-1035.pdf,papers,20140101Z00:00:00,,,University of Edinburgh; University of Oxford,,,,,,
Michele Tortelli – Politecnico di Bari,Bloom filter-based Routing in NDN,http://www.poliba.it/Didattica/docs/scorepoliba2014_submission_179.pdf,papers,20140101Z00:00:00,,,Politecnico di Bari,,,,,,
"Filip Ginter, Jenna Kanerva – University of Turku",Fast Training of word2vec Representations Using N-gram Corpora,http://www2.lingfil.uu.se/SLTC2014/abstracts/sltc2014_submission_27.pdf,papers,20140101Z00:00:00,,,University of Turku,,,,,,
"Petar Petrovski, Volha Bryl, Christian Bizer – University of Mannheim, Germany- Research Group Data and Web Science",Learning Regular Expressions for the Extraction of Product Attributes from E-commerce Microdata,http://ceur-ws.org/Vol-1267/LD4IE2014_Petrovski.pdf,papers,20140101Z00:00:00,,,"University of Mannheim, Germany- Research Group Data and Web Science",,,,,,
"Robert Meusel, Petar Petrovski, Christian Bizer – University of Mannheim, Germany- Research Group Data and Web Science","The Web Data Commons Microdata, RDFa and Microformat Dataset Series",http://link.springer.com/chapter/10.1007/978-3-319-11964-9_18#page-1,papers,20140101Z00:00:00,,,"University of Mannheim, Germany- Research Group Data and Web Science",,,,,,
"Robert Meusel, Peter Mika, Roi Blanko – University of Mannheim; Yahoo Labs- Barcelona",Focused Crawling for Structured Data,http://dl.acm.org/citation.cfm?id=2661902,papers,20140101Z00:00:00,,,University of Mannheim; Yahoo Labs- Barcelona,,,,,,
"Chenchen Ding, Masao Utiyama, Eiichiro Sumita – National Institute of Information and Communications Technology Japan",Document-level Re-ranking with Soft Lexical and Semantic Features for Statistical Machine Translation,http://www.mibel.cs.tsukuba.ac.jp/~tei/AMTA2014.pdf,papers,20140101Z00:00:00,,,National Institute of Information and Communications Technology Japan,,,,,,
"Masumi Shirakawa, Kotaro Nakayama, Eiji Aramaki, Takahiro Hara, Shojiro Nishio – Osaka University",Collecting Conceptualized Relations from Terabytes of Web Texts for Understanding Unknown Terms,http://dl.acm.org/citation.cfm?id=2682777,papers,20140101Z00:00:00,,,Osaka University,,,,,,
"Jenna Kanerva, Juhani Luotolahti, Veronika Laippala, Filip Ginter – University of Turku",Syntactic N-gram Collection from a Large-Scale Corpus of Internet Finnish,http://ebooks.iospress.nl/volumearticle/38025,papers,20140101Z00:00:00,,,University of Turku,,,,,,
"Willem Robert van Hage, Thomas Ploeger, Jesper Hoeksema – SynerScope B.V., VU University Amsterdam",Number frequency on the web,http://dl.acm.org/citation.cfm?id=2576962,papers,20140101Z00:00:00,,,"SynerScope B.V., VU University Amsterdam",,,,,,
"Christian Buck, Kenneth Heafield, Bas van Ooyen – University of Edinburgh, Stanford University, Owlin BV",N-gram Counts and Language Models from the Common Crawl,http://statmt.org/ngrams/BuckEtAl_LREC2014_CommonCrawlLM.pdf,papers,20140101Z00:00:00,,,"University of Edinburgh, Stanford University, Owlin BV",,,,,,
"Christian Hardmeier, Sara Stymne, Jörg Tiedemann, Aaron Smith, Joakim Nivre – Uppsala University: Department of Linguistics and Philology",Anaphora Models and Reordering for Phrase-Based SMT,http://acl2014.org/acl2014/W14-33/pdf/W14-3312.pdf,papers,20140101Z00:00:00,,,Uppsala University: Department of Linguistics and Philology,,,,,,
"Lane O. B. Schwartz, Timothy Anderson, Jeremy Gwinnup, Katherine M. Young – Air Force Research Laboratory, SRA International, N-Space Analysis LLC",Machine Translation and Monolingual Postediting:The AFRL WMT-14 System,http://www.ling.uni-potsdam.de/~koller/aclpub/W14-33/cdrom/pdf/W14-3321.pdf,papers,20140101Z00:00:00,,,"Air Force Research Laboratory, SRA International, N-Space Analysis LLC",,,,,,
"Hoang Cuong, Khalil Sima’an – University of Amsterdam - Institute for Logic, Language and Computation",Latent Domain Translation Models in Mix-of-Domains Haystack,http://www.aclweb.org/anthology/C/C14/C14-1182.pdf,papers,20140101Z00:00:00,,,"University of Amsterdam - Institute for Logic, Language and Computation",,,,,,
"Thomas Steiner, Hannes Mühleisen, Ruben Verborgh, Pierre-Antoine Champin, Benoît Encelle, Yannick Prié – Université de Lyon, Database Architectures Group; Multimedia Lab, Ghent University; iMinds, Université de Nantes",Weaving the Web(VTT) of Data,http://telemedicina.unifesp.br/pub/Events/2013-05%20-%20WWW2013/www2013/www2013.org/companion/p1399.pdf,papers,20140101Z00:00:00,,,"Université de Lyon, Database Architectures Group; Multimedia Lab, Ghent University; iMinds, Université de Nantes",,,,,,
"Marcin Wylot, Philippe Cudré-Mauroux, Paul Groth – eXascale Infolab, University of Fribourg; VU University Amsterdam",TripleProv: Efficient Processing of Lineage Queries in a Native RDF Store,http://exascale.info/sites/default/files/TipleProv.pdf,papers,20140101Z00:00:00,,,"eXascale Infolab, University of Fribourg; VU University Amsterdam",,,,,,
"Robert Meusel, Sebastiano Vigna, Oliver Lehmberg, Christian Bizer – Data and Web Science Group - University of Mannheim, Laboratory for Web - Algorithmics Università degli Studi di Milano",Graph Structure in the Web — Revisited,http://vigna.di.unimi.it/ftp/papers/GraphStructureRevisited.pdf,papers,20140101Z00:00:00,,,"Data and Web Science Group - University of Mannheim, Laboratory for Web - Algorithmics Università degli Studi di Milano",,,,,,
"Calvin Ardi, John Heidemann – USC/Information Sciences Institute",Web-scale Content Reuse Detection,ftp://ftp.isi.edu/isi-pubs/tr-692.pdf,papers,20140101Z00:00:00,,,USC/Information Sciences Institute,,,,,,
Yuta Tsuboi – IBM Resarch,Neural Networks Leverage Corpus-wide Information for Part-of-speech Tagging,http://2boy.org/~yuta/publications/neuraltagger-emnlp2014-tsuboi.pdf,papers,20140101Z00:00:00,,,IBM Resarch,,,,,,
"Mauro Cettolo, Nicola Bertoldi, Marcello Federico, Holger Schwenk, Loïc Barrault, Christophe Servan – Fondazione Bruno Kessler, University of Le Mans, Xerox Research Centre Europe",Translation project adaptation for MT-enhanced computer assisted translation,http://link.springer.com/article/10.1007/s10590-014-9152-1,papers,20140101Z00:00:00,,,"Fondazione Bruno Kessler, University of Le Mans, Xerox Research Centre Europe",,,,,,
"Germán Sanchis-Trilles, Daniel Ortiz-Martınez, Francisco Casacuberta – PRHLT Centre - Universidad Politécnica de Valencia",Efficient Wordgraph Pruning for Interactive Translation Prediction,http://www.casmacat.eu/uploads/Main/2eamt2014.pdf,papers,20140101Z00:00:00,,,PRHLT Centre - Universidad Politécnica de Valencia,,,,,,
"Vasilis Kolias, Ioannis Anagnostopoulos, Eleftherios Kayafas – National Technical University of Athens, University of Thessaly",Exploratory Analysis of a Terabyte Scale Web Corpus,http://arxiv.org/abs/1409.5443,papers,20140101Z00:00:00,,,"National Technical University of Athens, University of Thessaly",,,,,,
"Masahiro Mizukami, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura – Nara Institute of Science and Technology",Building a Free General-Domain Paraphrase Database for Japanese,http://isw3.naist.jp/~masahiro-mi/paper/ma14cocosda.pdf,papers,20140101Z00:00:00,,,Nara Institute of Science and Technology,,,,,,
commoncrawl_citations_annotated_2015.csv
ADDED
@@ -0,0 +1,14 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Robert Meusel, Sebastiano Vigna, Oliver Lehmberg, Christian Bizer – University of Mannheim, Germany; Università degli Studi di Milano, Italy",The Graph Structure in the Web – Analyzed on Different Aggregation Levels,https://pdfs.semanticscholar.org/b5d5/88298e6845b4bfd40ea779ce21e628239ef3.pdf,papers,20150101Z00:00:00,,,"University of Mannheim, Germany; Università degli Studi di Milano, Italy",web-science/hyperlinkgraph,,,,,
"Alex Stolz, Martin Hepp – Universitaet der Bundeswehr Munich, Germany",Towards Crawling the Web for Structured Data: Pitfalls of Common Crawl for E-Commerce,http://ceur-ws.org/Vol-1426/paper-04.pdf,papers,20150101Z00:00:00,,,"Universitaet der Bundeswehr Munich, Germany","nlp/corpus-representativeness, semantic web, microdata, e-commerce",,,,,
"Julian Eberius, Maik Thiele, Katrin Braunschweig, Wolfgang Lehner – Technische Universität Dresden, Germany",Top-k Entity Augmentation Using Consistent Set Covering,https://www.semanticscholar.org/paper/Top-k-entity-augmentation-using-consistent-set-Eberius-Thiele/a554fe7c49837e2d2d995e00fd3b62a6ca5650f2,papers,20150101Z00:00:00,,,"Technische Universität Dresden, Germany","semantic web, web tables, web mining","To enable repeatability we publish the implementation², but also include the web table corpus used for the evaluation³. This corpus contains 100M Web tables extracted from a publicly available Web crawl⁴ [4: http://commoncrawl.org]",,{DresdenWebTableCorpus},,
"Matthew Malensek, Sangmi Lee Pallickara, Shrideep Pallickara – Colorado State University",Alleviation of Disk I/O Contention in Virtualized Settings for Data-Intensive Computing,http://galileo.cs.colostate.edu/papers/DiskInterference-BDC.pdf,papers,20150101Z00:00:00,,,Colorado State University,,,,,,
"Titus Barik, Kevin Lubick, Justin Smith, John Slankas, Emerson Murphy-Hill – ABB Corporate Research and North Carolina State University","FUSE: A Reproducible, Extendable, Internet-scale Corpus of Spreadsheets",http://kjlubick.github.io/pubs/MSR2015-Fuse_spreadsheet_corpus.pdf,papers,20150101Z00:00:00,,,ABB Corporate Research and North Carolina State University,,,,,,
"Joachim Daiber, Lautaro Quiroz, Roger Wechsler, Stella Frank – University of Amsterdam",Splitting Compounds by Semantic Analogy,https://ufal.mff.cuni.cz/~rosa/2015/docs/dmtw2015.pdf#page=26,papers,20150101Z00:00:00,,,University of Amsterdam,,,,,,
"Mikhail Galkin, Dmitry Mouromtsev, Sören Auer – IMTO University- St. Petersburg, Russia, University of Bonn- Germany",Identifying Web Tables –Supporting a Neglected Type of Content on the Web,http://arxiv.org/pdf/1503.06598.pdf,papers,20150101Z00:00:00,,,"IMTO University- St. Petersburg, Russia, University of Bonn- Germany",,,,,,
Brendan Juba – Washington University in St. Louis,Principled Sampling for Anomaly Detection,http://www.cse.wustl.edu/~bjuba/papers/anomaly_detection.pdf,papers,20150101Z00:00:00,,,Washington University in St. Louis,,,,,,
"Kowalczuk Ewa, Jedrzej Potoniec, Agnieszka Ławrynowicz – Institute of Computing Science, Poznan University of Technology, Poland",Extracting Usage Patterns of Ontologies on the Web: a Case Study on GoodRelations Vocabulary in RDFa,http://ceur-ws.org/Vol-1265/owled2014_submission_14.pdf,papers,20150101Z00:00:00,,,"Institute of Computing Science, Poznan University of Technology, Poland",,,,,,
"Junfei Guo, Juan Liu, Qi Han, Andreas Maletti – School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany",A Tunable Language Model for Statistical Machine Translation,http://www.ims.uni-stuttgart.de/institut/mitarbeiter/maletti/pub/guoliuhanmal14.pdf,papers,20150101Z00:00:00,,,"School of Computer, Wuhan University, China, Institute for Natural Language Processing, University of Stuttgart, Germany; Institute for Visualization and Interactive Systems, University of Stuttgart, Germany; Institute of Computer Science, University of Leipzig, Germany",,,,,,
"Kay Ousterhout, Ryan Rasti, Sylvia Ratnasamy, Scott Shenker, Byung-Gon Chun – UC Berkeley, ICSI, Vmware, Seoul National University",Making Sense of Performance in Data Analytics Frameworks,http://www.eecs.berkeley.edu/~keo/publications/nsdi15-final147.pdf,papers,20150101Z00:00:00,,,"UC Berkeley, ICSI, Vmware, Seoul National University",,,,,,
"Evan Jaffe, Lifeng Jin, David King, Marten van Schinjdel – Dept. of Linguistics, Ohio State University",Azmat: Sentence Similarity using Associative Matrices,http://www.ling.ohio-state.edu/~vanschm/resources/uploads/jaffe_etal-2015-semeval.pdf,papers,20150101Z00:00:00,,,"Dept. of Linguistics, Ohio State University",,,,,,
"Alexander A Alemi, Paul Ginsparg – Dept. of Physics, Cornell University, Dept. of Physics and Information Science, Cornell University",Text Segmentation based on Semantic Word Embeddings,http://arxiv.org/pdf/1503.05543.pdf,papers,20150101Z00:00:00,,,"Dept. of Physics, Cornell University, Dept. of Physics and Information Science, Cornell University",,,,,,
commoncrawl_citations_annotated_2016.csv
ADDED
@@ -0,0 +1,3 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Ivan Habernal, Omnia Zayed, Iryna Gurevych – University of Darmstadt, Germany",C4Corpus: Multilingual Web-Size Corpus with Free License,http://www.lrec-conf.org/proceedings/lrec2016/pdf/388_Paper.pdf,papers,20160101Z00:00:00,,"Large Web corpora containing full documents with permissive licenses are crucial for many NLP tasks. In this article we present the construction of 12 million-pages Web corpus (over 10 billion tokens) licensed under CreativeCommons license family in 50+ languages that has been extracted from CommonCrawl, the largest publicly available general Web crawl to date with about 2 billion crawled URLs. Our highly-scalable Hadoop-based framework is able to process the full CommonCrawl corpus on 2000+ CPU cluster on the Amazon Elastic Map/Reduce infrastructure. The processing pipeline includes license identification, state-of-the-art boilerplate removal, exact duplicate and near-duplicate document removal, and language detection. The construction of the corpus is highly configurable and fully reproducible, and we provide both the framework (DKPro C4CorpusTools) and the resulting data (C4Corpus) to the research community.","University of Darmstadt, Germany","nlp/corpus-construction, legal/copyright, license/creative-commons, nlp/boilerplate-removal, ir/duplicate-detection",,CC-MAIN-2016-07,{DKPro-C4},,
"Roland Schäfer – Freie Universität Berlin, Germany",CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws,http://rolandschaefer.net/?p=994,papers,20160101Z00:00:00,,"In this paper, I describe a method of creating massively huge web corpora from the CommonCrawl data sets and redistributing the resulting annotations in a stand-off format. Current EU (and especially German) copyright legislation categorically forbids the redistribution of downloaded material without express prior permission by the authors. Therefore, stand-off annotations or other derivates are the only format in which European researchers (like myself) are allowed to re-distribute the respective corpora. In order to make the full corpora available to the public despite such restrictions, the stand-off format presented here allows anybody to locally reconstruct the full corpora with the least possible computational effort.","Freie Universität Berlin, Germany","nlp/corpus-construction, legal/copyright",,,{CommonCOW},,
commoncrawl_citations_annotated_2017.csv
ADDED
@@ -0,0 +1,14 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Roland Schäfer – Freie Universität Berlin, Germany",Accurate and Efficient General-purpose Boilerplate Detection for Crawled Web Corpora,https://doi.org/10.1007/s10579-016-9359-2,papers,20170101Z00:00:00,"Boilerplate, Corpus construction, Non-destructive corpus normalization, Web corpora","Removal of boilerplate is one of the essential tasks in web corpus construction and web indexing. Boilerplate (redundant and automatically inserted material like menus, copyright notices, navigational elements, etc.) is usually considered to be linguistically unattractive for inclusion in a web corpus. Also, search engines should not index such material because it can lead to spurious results for search terms if these terms appear in boilerplate regions of the web page. The size of large web corpora necessitates the use of efficient algorithms while a high accuracy directly improves the quality of the final corpus. In this paper, I present and evaluate a supervised machine learning approach to general-purpose boilerplate detection for languages based on Latin alphabets which is both very efficient and very accurate. Using a Multilayer Perceptron and a high number of carefully engineered features, I achieve between 95\% and 99\% correct classifications (depending on the input language) with precision and recall over 0.95. Since the perceptrons are trained on language-specific data, I also evaluate how well perceptrons trained on one language perform on other languages. The single features are also evaluated for the merit they contribute to the classification. I show that the accuracy of the Multilayer Perceptron is on a par with that of other classifiers such as Support Vector Machines. I conclude that the quality of general-purpose boilerplate detectors depends mainly on the availability of many well-engineered features and which are highly language-independent. The method has been implemented in the open-source texrex web page cleaning software, and large corpora constructed using it are available from the COW initiative, including the CommonCOW corpora created from CommonCrawl data sets.","Freie Universität Berlin, Germany","nlp/boilerplate-removal, nlp/web-as-corpus, nlp/corpus-construction",,,,,
"Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, Václava Kettnerová, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, Josie Li – Charles University, Czech Republic; Uppsala University, Sweden; University of Turku, Finland; University of Cambridge; Google; Bauhaus-Universität Weimar, Germany; UiT The Arctic University of Norway; University of the Basque Country, Spain; Istanbul Technical University, Turkey; Stanford University; New York University Abu Dhabi; City University of Hong Kong; Ohio State University, USA; University of Turin, Italy; University of Pisa, Italy; IBM Research; Nuance Communications; INRIA – Paris 7, France; University of Tübingen, Germany; DFKI, Germany; text & form, Germany",CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies,http://www.aclweb.org/anthology/K/K17/K17-3001.pdf,papers,20170101Z00:00:00,,"The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets. In 2017, the task was devoted to learning dependency parsers for a large number of languages, in a real-world setting without any gold-standard annotation on input. All test sets followed a unified annotation scheme, namely that of Universal Dependencies. In this paper, we define the task and evaluation methodology, describe how the data sets were prepared, report and analyze the main results, and provide a brief categorization of the different approaches of the participating systems.","Charles University, Czech Republic; Uppsala University, Sweden; University of Turku, Finland; University of Cambridge; Google; Bauhaus-Universität Weimar, Germany; UiT The Arctic University of Norway; University of the Basque Country, Spain; Istanbul Technical University, Turkey; Stanford University; New York University Abu Dhabi; City University of Hong Kong; Ohio State University, USA; University of Turin, Italy; University of Pisa, Italy; IBM Research; Nuance Communications; INRIA – Paris 7, France; University of Tübingen, Germany; DFKI, Germany; text & form, Germany","nlp/dependency-parsing, nlp/dependency-treebank, nlp/corpus-construction","The supporting raw data was gathered from CommonCrawl, which is a publicly available web crawl created and maintained by the non-profit CommonCrawl foundation.² The data is publicly available in the Amazon cloud both as raw HTML and as plain text. It is collected from a number of independent crawls from 2008 to 2017, and totals petabytes in size. We used cld2³ as the language detection engine because of its speed, available Python bindings and large coverage of languages. 
Language detection was carried out on the first 1024 bytes of each plaintext document. Deduplication was carried out using hashed document URLs, a simple strategy found in our tests to be effective for coarse duplicate removal. The data for each language was capped at 100,000 tokens per a single input file.",,conll-2017-shared-task,,
"Abu Bakr Soliman, Kareem Eissa, Samhaa El-Beltagy – Nile University, Egypt",AraVec: A set of Arabic Word Embedding Models for use in Arabic NLP,https://www.researchgate.net/publication/319880027_AraVec_A_set_of_Arabic_Word_Embedding_Models_for_use_in_Arabic_NLP,papers,20170101Z00:00:00,,,"Nile University, Egypt",nlp/word-embeddings,"we have used a subset of the January 2017 crawl dump. The dump contains more than 3.14 billion web pages and about 250 Terabytes of uncompressed content. [...] We used WET files as we were only interested in plain text for building the distributed word representation models. Due to the size of the dump, which requires massive processing power and time for handling, we only used 30\% of the data contained in it. As this subset comprises about one billion web pages (written in multiple language), we believed that it was large enough to provide sufficient Arabic Web pages from which we can build a representative word embeddings model. Here it is important to note that the Common Crawl project does not provide any technique for identifying or selecting the language of web pages to download. So, we had to download data first, and then discard pages that were not written in Arabic. The Arabic detection phase was performed using some regex commands and some NLP techniques to distinguish Arabic from other languages. After the completion of this phase we succeeded in obtaining 4,379,697 Arabic web pages which were then segmented into more than 180,000,000 paragraphs/documents for building our models.",,,,
"Tommy Dean, Ali Pasha, Brian Clarke, Casey J. Butenhoff – Virginia Polytechnic Institute and State University, USA; Eastman Chemical Company; USA",Common Crawl Mining,http://hdl.handle.net/10919/77629,papers,20170101Z00:00:00,,,"Virginia Polytechnic Institute and State University, USA; Eastman Chemical Company; USA","information retrieval, market research, business intelligence",The main goal behind the Common Crawl Mining system is to improve Eastman Chemical Company’s ability to use timely knowledge of public concerns to inform key business decisions. It provides information to Eastman Chemical Company that is valuable for consumer chemical product marketing and strategy development. Eastman desired a system that provides insight into the current chemical landscape. Information about trends and sentiment towards chemicals over time is beneficial to their marketing and strategy departments. They wanted to be able to drill down to a particular time period and look at what people were writing about certain keywords. [...] The final Common Crawl Mining system is a search engine implemented using Elasticsearch. Relevant records are identified by first analyzing Common Crawl for Web Archive (WARC) files that have a high frequency of records from interesting domains.,,,,
"Yuheng Du, Alexander Herzog, Andre Luckow, Ramu Nerella, Christopher Gropp, Amy Apon – Clemson University, USA",Representativeness of latent dirichlet allocation topics estimated from data samples with application to common crawl,http://alexherzog.net/files/IEEE_BigData_2017_Representativeness_of_LDA.pdf,papers,20170101Z00:00:00,,,"Clemson University, USA","nlp/topic-modeling, nlp/corpus-representativeness","Common Crawl is a massive multi-petabyte dataset hosted by Amazon. It contains archived HTML web page data from 2008 to date. Common Crawl has been widely used for text mining purposes. Using data extracted from Common Crawl has several advantages over a direct crawl of web data, among which is removing the likelihood of a user’s home IP address becoming blacklisted for accessing a given web site too frequently. However, Common Crawl is a data sample, and so questions arise about the quality of Common Crawl as a representative sample of the original data. We perform systematic tests on the similarity of topics estimated from Common Crawl compared to topics estimated from the full data of online forums. Our target is online discussions from a user forum for automotive enthusiasts, but our research strategy can be applied to other domains and samples to evaluate the representativeness of topic models. We show that topic proportions estimated from Common Crawl are not significantly different than those estimated on the full data. We also show that topics are similar in terms of their word compositions, and not worse than topic similarity estimated under true random sampling, which we simulate through a series of experiments. Our research will be of interest to analysts who wish to use Common Crawl to study topics of interest in user forum data, and analysts applying topic models to other data samples.",,,,
"Shalini Ghosh, Phillip Porras, Vinod Yegneswaran, Ken Nitz, Ariyam Das – CSL, SRI International, Menlo Park",ATOL: A Framework for Automated Analysis and Categorization of the Darkweb Ecosystem,https://www.aaai.org/ocs/index.php/WS/AAAIW17/paper/download/15205/14661,papers,20170101Z00:00:00,,,"CSL, SRI International, Menlo Park","web-science, information retrieval, nlp/text-classification",".onion references from [...] and an open repository of (non-onion) Web crawling data, called Common Crawl (Common Crawl Foundation 2016).",,,,
"Filip Ginter, Jan Hajič, Juhani Luotolahti, Milan Straka, Daniel Zeman – Charles University, Czech Republic; University of Turku, Finland",CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings,http://hdl.handle.net/11234/1-1989,papers,20170101Z00:00:00,,,"Charles University, Czech Republic; University of Turku, Finland","nlp/corpus-construction, nlp/word-embeddings, nlp/syntactic-annotations, nlp/dependency-parsing","Automatic segmentation, tokenization and morphological and syntactic annotations of raw texts in 45 languages, generated by UDPipe (http://ufal.mff.cuni.cz/udpipe), together with word embeddings of dimension 100 computed from lowercased texts by word2vec (https://code.google.com/archive/p/word2vec/). [...] Note that the CC BY-SA-NC 4.0 license applies to the automatically generated annotations and word embeddings, not to the underlying data, which may have different license and impose additional restrictions.",,conll-2017-shared-task,,
"Jakub Kúdela, Irena Holubová, Ondřej Bojar – Charles University, Czech Republic",Extracting Parallel Paragraphs from Common Crawl,https://ufal.mff.cuni.cz/pbml/107/art-kudela-holubova-bojar.pdf,papers,20170101Z00:00:00,,"Most of the current methods for mining parallel texts from the web assume that web pages of web sites share same structure across languages. We believe that there still exists a non-negligible amount of parallel data spread across sources not satisfying this assumption. We propose an approach based on a combination of bivec (a bilingual extension of word2vec) and locality-sensitive hashing which allows us to efficiently identify pairs of parallel segments located anywhere on pages of a given web domain, regardless their structure. We validate our method on realigning segments from a large parallel corpus. Another experiment with real-world data provided by Common Crawl Foundation confirms that our solution scales to hundreds of terabytes large set of web-crawled data.","Charles University, Czech Republic","nlp/machine-translation, nlp/corpus-construction",,,,,
"Amir Mehmood, Hafiz Muhammad Shafiq, Abdul Waheed – UET, Lahore, Pakistan",Understanding Regional Context of World Wide Web using Common Crawl Corpus,https://www.researchgate.net/publication/321489200_Understanding_Regional_Context_of_World_Wide_Web_using_Common_Crawl_Corpus,papers,20170101Z00:00:00,,,"UET, Lahore, Pakistan","web-science, webometrics",,CC-MAIN-2016-50,,,
"Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, Chris Biemann – University of Hamburg, Germany; University of Mannheim, Germany",Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl,http://arxiv.org/abs/1710.01779,papers,20170101Z00:00:00,,,"University of Hamburg, Germany; University of Mannheim, Germany","nlp/dependency-parsing, nlp/corpus-construction",,CC-MAIN-2016-07,depcc,,
"Ajinkya Kale, Thrivikrama Taula, Sanjika Hewavitharana, Amit Srivastava – eBay Inc.",Towards semantic query segmentation,https://arxiv.org/abs/1707.07835,papers,20170101Z00:00:00,,,eBay Inc.,"ir/query-segmentation, nlp/word-embeddings, patent",,,,,GloVe-word-embeddings
"Kjetil Bugge Kristoffersen – University of Oslo, Norway",Common crawled web corpora: constructing corpora from large amounts of web data,http://urn.nb.no/URN:NBN:no-60569,papers,20170101Z00:00:00,,"Efforts to use web data as corpora seek to provide solutions to problems traditional corpora suffer from, by taking advantage of the web's huge size and diverse type of content. This thesis will discuss the several sub-tasks that make up the web corpus construction process, like HTML markup removal, language identification, boilerplate removal, duplication detection, etc. Additionally, by using data provided by the Common Crawl Foundation, I develop a new very large English corpus with more than 135 billion tokens. Finally, I evaluate the corpus by training word embeddings and show that the trained model largely outperforms models trained on other corpora in a word analogy and word similarity task.","University of Oslo, Norway","nlp/corpus-construction, nlp/web-as-corpus",,,,,
"David Stuart – University of Wolverhampton, Wolverhampton, UK",Open bibliometrics and undiscovered public knowledge,https://doi.org/10.1108/OIR-07-2017-0209,papers,20170101Z00:00:00,,,"University of Wolverhampton, Wolverhampton, UK",web-science/webometrics,"Whether altmetrics is really any more open than traditional citation analysis is a matter of debate, although services such as Common Crawl (http://commoncrawl.org), an open repository of web crawl data, provides the opportunity for more open webometrics, [...]",,,,
commoncrawl_citations_annotated_2018.csv
ADDED
The diff for this file is too large to render.
commoncrawl_citations_annotated_2019.csv
ADDED
@@ -0,0 +1,14 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Nils Brügger, Ian Milligan – Aarhus University, Denmark; University of Waterloo, Canada",The SAGE Handbook of Web History,https://us.sagepub.com/en-us/nam/the-sage-handbook-of-web-history/book252251,papers,20190101Z00:00:00,,,"Aarhus University, Denmark; University of Waterloo, Canada","web-science, web history",,,,,
"Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave – Facebook AI",CCNet: Extracting high quality monolingual datasets from web crawl data,https://arxiv.org/abs/1911.00359,papers,20190101Z00:00:00,,"Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.",Facebook AI,"nlp/corpus-construction, nlp/web-as-corpus, nlp/low-resource-language","[about https://github.com/facebookresearch/cc_net] In this paper, we present a data collection pipeline that allows to gather massive monolingual corpora of high quality in a variety of languages, including many low-resource ones. The principles of our pipeline are general and we show the results of its application to data collected by the Common Crawl project.¹ Common Crawl is a massive non-curated dataset of webpages in many languages, mixed together in temporal snapshots of the web.",,CCNet,,
"A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, Ilya Sutskever – OpenAI, San Francisco, California, United States",Language models are unsupervised multitask learners,https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe,papers,20190101Z00:00:00,,,"OpenAI, San Francisco, California, United States",cc-cited-not-used,"A promising source of diverse and nearly unlimited text is web scrapes such as Common Crawl. While these archives are many orders of magnitude larger than current language modeling datasets, they have significant data quality issues. Trinh & Le (2018) used Common Crawl in their work on commonsense reasoning but noted a large amount of documents “whose content are mostly unintelligible”. We ob-served similar data issues in our initial experiments with Common Crawl. Trinh & Le (2018)’s best results were achieved using a small subsample of Common Crawl which included only documents most similar to their target dataset,the Winograd Schema Challenge. While this is a pragmatic approach to improve performance on a specific task, we want to avoid making assumptions about the tasks to be performed ahead of time.Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny. The resulting dataset, WebText, contains the text subsetof these 45 million links.",,,,
"Pedro Javier Ortiz Suárez, Benoît Sagot, Laurent Romary – Inria, Paris, France; Sorbonne University, Paris, France",Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures,https://hal.inria.fr/hal-02148693,papers,20190101Z00:00:00,,,"Inria, Paris, France; Sorbonne University, Paris, France",nlp/corpus-construction,"We use the November 2018 snapshot which surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header. From now on, when we mention the “Common Crawl” corpus, we refer to this particular November 2018 snapshot.",CC-MAIN-2018-47 (WET),OSCAR,,
"Dominik Mottl – Hochschule Darmstadt, Germany",Multi-Label Branchenklassifikation von Web-Texten,https://fbmn.h-da.de/uploads/Themen/WS18_thesis_mottl.pdf,papers,20190101Z00:00:00,,,"Hochschule Darmstadt, Germany","nlp/NER, entity-linking",NER of company names and linking to DBpedia performed on English texts in 712 WET files of November 2018 crawl (CC-MAIN-2018-47) using cc-pyspark.,,,,
"Sebastian Nagel – Common Crawl, USA",Accessing WARC files via SQL,https://digital.library.unt.edu/ark:/67531/metadc1608961/,papers,20190101Z00:00:00,,,"Common Crawl, USA","web-archiving, SQL, Parquet",,cc-index-table,,,
"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov – Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA; Facebook AI",RoBERTa: A Robustly Optimized BERT Pretraining Approach,https://arxiv.org/abs/1907.11692,papers,20190101Z00:00:00,,,"Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA; Facebook AI","nlp/corpus-construction, nlp/language-model","We find that BERT was significantly undertrained and propose an improved recipe for training BERT models, which we call RoBERTa, that can match or exceed the performance of all of the post-BERT methods. Our modifications are simple, they include: (1) training the model longer, with bigger batches, over more data; (2) removing the next sentence prediction objective; (3) training on longer sequences; and (4) dynamically changing the masking pattern applied to the training data. We also collect a large new dataset (CC-NEWS) of comparable size to other privately used datasets, to better control for training set size effects. [...] CC-NEWS, which we collected from the English portion of the CommonCrawl News dataset (Nagel, 2016). The data contains 63 million English news articles crawled between September 2016 and February 2019. (76GB after filtering).⁴ [⁴ We use news-please (Hamborg et al.,2017) to collect and extract CC-NEWS. CC-NEWS is similar to the REALNEWS dataset described in Zellers et al. (2019).]",CC-NEWS,CC-NEWS-RoBERTa,,
"Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi – University of Washington, USA; Allen Institute for Artificial Intelligence, USA",Defending against neural fake news,http://papers.nips.cc/paper/9106-defending-against-neural-fake-news.pdf,papers,20190101Z00:00:00,,,"University of Washington, USA; Allen Institute for Artificial Intelligence, USA","nlp/language-model, nlp/fake-news-detection, nlp/text-classification, misinformation, disinformation","Dataset. We present RealNews, a large corpus of news articles from Common Crawl. Training Grover requires a large corpus of news articles with metadata, but none currently exists. Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News. We used the Newspaper Python library to extract the body and meta-data from each article. News from Common Crawl dumps from December 2016 through March 2019 were used as training data; articles published in April 2019 from the April 2019 dump were used for evaluation. After deduplication, RealNews is 120 gigabytes without compression. [...] Obtaining the data required through Common Crawl cost \$10k in AWS credits and can be massively parallelized over many CPUs. [...]",,Grover-RealNews,,
"Giulio Ermanno Pibiri, Matthias Petri, Alistair Moffat – University of Melbourne, Australia; University of Pisa, Italy; ISTI-CNR, Pisa, Italy",Fast Dictionary-Based Compression for Inverted Indexes,https://dl.acm.org/citation.cfm?id=3290962,papers,20190101Z00:00:00,,"Dictionary-based compression schemes provide fast decoding operation, typically at the expense of reduced compression effectiveness compared to statistical or probability-based approaches. In this work, we apply dictionary-based techniques to the compression of inverted lists, showing that the high degree of regularity that these integer sequences exhibit is a good match for certain types of dictionary methods, and that an important new trade-off balance between compression effectiveness and compression efficiency can be achieved. Our observations are supported by experiments using the document-level inverted index data for two large text collections, and a wide range of other index compression implementations as reference points. Those experiments demonstrate that the gap between efficiency and effectiveness can be substantially narrowed.","University of Melbourne, Australia; University of Pisa, Italy; ISTI-CNR, Pisa, Italy","information-retrieval/search-engine, information-retrieval/inverted-index","We use the standard Gov2 collection containing 426 GiB of text; and CCNEWS, an English subset of the freely available NEWS subset of the CommonCrawl¹ [¹http://commoncrawl.org/2016/10/news-dataset-available/], consisting of news articles in the period 09/01/16 to 30/03/18, following the methodology of Petri and Moffat [28].",CC-NEWS,,,
"Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin – Facebook AI",CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB,https://arxiv.org/abs/1911.04944,papers,20190101Z00:00:00,"Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","We show that margin-based bitext mining in a multilingual sentence space can be applied to monolingual corpora of billions of sentences. We are using ten snapshots of a curated common crawl corpus (Wenzek et al., 2019), totalling 32.7 billion unique sentences. Using one unified approach for 38 languages, we were able to mine 4.5 billions parallel sentences, out of which 661 million are aligned with English. 20 language pairs have more then 30 million parallel sentences, 112 more then 10 million, and most more than one million, including direct alignments between many European or Asian languages.",Facebook AI,"nlp/corpus-construction, nlp/parallel-corpus, nlp/machine-translation","The curated Common Crawl corpus¶ In this work, we propose to mine parallel sentences from the Web, by using the data released by the Common Crawl project.[⁵https://commoncrawl.org/] Each month, a snapshot of the Web containing terabytes of web pages in various languages is obtained by randomly exploring URLs. We start by applying some preprocessing steps to the raw text data, following the pipeline introduced by Wenzek et al. (2019) and leading to the CCNet dataset. The first step is to deduplicate the data at the paragraph level, as the original crawls contain up to 70% of duplicated data. This preprocessing removes low quality content, such as boilerplate, navigation menus or cookie warnings. The second step of the pipeline is to identify the language of each document, using fastText6 (Grave et al., 2018). This language identifier uses a linear classifier with character n-gram features, and can recognize up to 176 languages. Finally, the last step of the preprocessing is to filter low quality content by training a language model on Wikipedia, and only keeping documents with a low perplexity score. We refer the reader to Wenzek et al. (2019) for more details about this pre- processing pipeline. In Figure 1, we report the number of unique sentences obtained after preprocessing ten snapshots from Common Crawl. We currently process 38 languages. The English Web content is abundant and we used only one snapshot.",,CCMatrix,,
"Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc'Aurelio Ranzato, Arthur Szlam – Facebook AI Research; Harvard University, USA",Real or Fake? Learning to Discriminate Machine from Human Generated Text,https://arxiv.org/abs/1906.03351,papers,20190101Z00:00:00,,,"Facebook AI Research; Harvard University, USA",nlp/text-classification,"CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016) [Sebastian Nagel. Cc-news. http://web.archive.org/save/http://commoncrawl. org/2016/10/news-dataset-available/, 2016.], which totals around 16 Billion words.",CC-NEWS,"CCNews (Bakhtin, et al. 2019)",,
"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le – Carnegie Mellon University, Google AI Brain Team",XLNet: Generalized Autoregressive Pretraining for Language Understanding,https://arxiv.org/abs/1906.08237,papers,20190101Z00:00:00,,,"Carnegie Mellon University, Google AI Brain Team",nlp/transformer-language-model,"Following BERT [10], we use the BooksCorpus [40] and English Wikipedia as part of our pretraining data, which have 13GB plain text combined. In addition, we include Giga5 (16GB text) [26], ClueWeb 2012-B (extended from 5]), and Common Crawl [6] for pretraining. We use heuristics to aggressively filter out short or low-quality articles for ClueWeb 2012-B and Common Crawl, which results in 19GB and 110GB text respectively.",,,,
"Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov – Facebook AI",Unsupervised Cross-lingual Representation Learning at Scale,https://arxiv.org/abs/1911.02116,papers,20190101Z00:00:00,"Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross- lingual transfer tasks. We train a Transformer- based masked language model on one hundred languages, using more than two terabytes of fil- tered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6\% average accu- racy on XNLI, +13\% average F1 score on MLQA, and +2.4\% F1 score on NER. XLM-R performs particularly well on low-resource lan- guages, improving 15.7\% in XNLI accuracy for Swahili and 11.4\% for Urdu over previ- ous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and ca- pacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per- language performance; XLM-R is very compet- itive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.",Facebook AI,"nlp/corpus-construction, nlp/web-as-corpus, nlp/language-model","Following Wenzek et al. (2019) 2, we build a clean CommonCrawl Corpus in 100 languages. [...] In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages.",,CC-100,CCNet,
commoncrawl_citations_annotated_2020.csv
ADDED
@@ -0,0 +1,20 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Joel Mackenzie, Rodger Benham, Matthias Petri, Johanne R. Trippas, J. Shane Culpepper, Alistair Moffat – The University of Melbourne, Melbourne, Australia; RMIT University, Melbourne, Australia; Amazon Alexa, Manhattan Beach, CA, USA",CC-News-En: A large English news corpus,https://doi.org/10.1145/3340531.3412762,papers,20200101Z00:00:00,"corpus, user query variations, collection, news search, crowdsourcing","We describe a static, open-access news corpus using data from the Common Crawl Foundation, who provide free, publicly available web archives, including a continuous crawl of international news articles published in multiple languages. Our derived corpus, CC-News-En, contains 44 million English documents collected between September 2016 and March 2018. The collection is comparable in size with the number of documents typically found in a single shard of a large-scale, distributed search engine, and is four times larger than the news collections previously used in offline information retrieval experiments. To complement the corpus, 173 topics were curated using titles from Reddit threads, forming a temporally representative sampling of relevant news topics over the 583 day collection window. Information needs were then generated using automatic summarization tools to produce textual and audio representations, and used to elicit query variations from crowdworkers, with a total of 10,437 queries collected against the 173 topics. Of these, 10,089 include key-stroke level instrumentation that captures the timings of character insertions and deletions made by the workers while typing their queries. These new resources support a wide variety of experiments, including large-scale efficiency exercises and query auto-completion synthesis, with scope for future addition of relevance judgments to support offline effectiveness experiments and hence batch evaluation campaigns.","The University of Melbourne, Melbourne, Australia; RMIT University, Melbourne, Australia; Amazon Alexa, Manhattan Beach, CA, USA","nlp/text-corpora, nlp/corpus-construction, ir/information-extraction","Our derived corpus, CC-News-En, contains 44 million English documents collected between September 2016 and March 2018. [...] One such example is the CommonCrawl Foundation,[¹ ] who generate large-scale crawls of the web at regular intervals. A key philosophy behind the Common Crawlis to democratize data, allowing open access with no fees. In late 2016, the Common Crawl Foundation announced a news-specific crawl (CC-News), [² ] with documents being added on a daily basis, and covering sources from a wide range of countries and languages. Here we derive a static, English segment of the CC-Newscrawl that we refer to as CC-News-En. Due to the storage and computation costs involved in filtering out non-English documents, we make the complete corpus available as a free resource, along with asuite of tools which can be used to replicate corpus extraction from the original source CC-News data. We also provide a set of 10,437 user query variations over 173 query topics, including keystroke-level data collected from a novel crowdworking experiment. Our goal is to encourage reproducible and replicable experimentation, with greatly reduced barriers to entry. [...] A total of 2,291 CC-News WARC files were processed to build CC-News-En, covering the period 26 August 2016 to 31 March 2018, inclusive. 
The first and last WARC files inthis collection are as follows: •CC-NEWS-20160826124520-00000.warc.gz •CC-NEWS-20180331191315-00143.warc.gz The resulting subset of compressed WARC files occupies 2.14 TiB of disk space, and contains a total of 102.5 million documents in over 100 languages. [...] Missing Documents and Temporal Gaps. During the creation of the collection, the CC-NEWS-20170812163812-00038.warc.gz file was not processed correctly by our pipeline, and was subsequently dropped from the CC-News-En corpus. In addition, there are six days within the 583 day period where no WARC files were added to the original CC-News crawl: 22/09/2016 – 25/09/2016 inclusive, 18/12/2017, and 22/12/2017. These gaps typically correspond to hardware and software upgrades on the crawl servers.[¹⁸ Private correspondence with Common Crawl Engineers.] It is also important to note that both CC-News and CC-News-En are not intended to be complete crawls of their sources, but rather, to provide a reproducible sample of these sites.",CC-NEWS,,,
"Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, Philipp Koehn – Facebook AI; Johns Hopkins University",CCAligned: A Massive collection of cross-lingual web-document pairs,https://www.aclweb.org/anthology/2020.emnlp-main.480,papers,20200101Z00:00:00,,"Cross-lingual document alignment aims to identify pairs of documents in two distinct languages that are of comparable content or translations of each other. In this paper, we exploit the signals embedded in URLs to label web documents at scale with an average precision of 94.5{\%} across different language pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other. We release a new web dataset consisting of over 392 million URL pairs from Common Crawl covering documents in 8144 language pairs of which 137 pairs include English. In addition to curating this massive dataset, we introduce baseline methods that leverage cross-lingual representations to identify aligned documents based on their textual content. Finally, we demonstrate the value of this parallel documents dataset through a downstream task of mining parallel sentences and measuring the quality of machine translations from models trained on this mined data. Our objective in releasing this dataset is to foster new research in cross-lingual NLP across a variety of low, medium, and high-resource languages.",Facebook AI; Johns Hopkins University,"nlp/machine-translation, nlp/text-corpora, nlp/parallel-corpus, nlp/cross-lingual-document-alignment","[...] we exploit the signals embedded in URLs to label web documents at scale with an average precision of 94.5{\%} across different language pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other. We release a new web dataset consisting of over 392 million URL pairs from Common Crawl covering documents in 8144 language pairs of which 137 pairs include English. [...] Starting from 68 Common Crawl snapshots with a raw document count of 169.4 billion documents, upon deduplication, the resultant corpus is approximately 29.6 billion web documents from 107.8 million distinct web domains – a 83{\%} reduction from the raw corpus.",,CCAligned-2020,,
"Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei – Johns Hopkins University; OpenAI",Language models are few-shot learners,https://arxiv.org/abs/2005.14165,papers,20200101Z00:00:00,,,Johns Hopkins University; OpenAI,"nlp/language-model, ai/deep-learning, nlp/autoregressive-transformer-language-model, nlp/question-answering, nlp/machine-translation, nlp/text-generation","Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset [...] constituting nearly a trillion words. [...] However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets. Therefore, we took 3 steps to improve the average quality of our datasets: (1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora, (2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity. Details of the first two points (processing of Common Crawl) are described in Appendix A.",,,,
"Metod Jazbec, Barna Pásztor, Felix Faltings, Nino Antulov-Fantulin, Petter N. Kolm – ETH Zurich, Switzerland; New York University, New York, USA",On the impact of publicly available news and information transfer to financial markets,https://arxiv.org/abs/2010.12002,papers,20200101Z00:00:00,,"We quantify the propagation and absorption of large-scale publicly available news articles from the World Wide Web to financial markets. To extract publicly available information, we use the news archives from the Common Crawl, a nonprofit organization that crawls a large part of the web. We develop a processing pipeline to identify news articles associated with the constituent companies in the S&P 500 index, an equity market index that measures the stock performance of U.S. companies. Using machine learning techniques, we extract sentiment scores from the Common Crawl News data and employ tools from information theory to quantify the information transfer from public news articles to the U.S. stock market. Furthermore, we analyze and quantify the economic significance of the news-based information with a simple sentiment-based portfolio trading strategy. Our findings provides support for that information in publicly available news on the World Wide Web has a statistically and economically significant impact on events in financial markets.","ETH Zurich, Switzerland; New York University, New York, USA","statistical-finance, ai/machine-learning, nlp/sentiment-analysis","In this article, we use news articles from the Common Crawl News, a subset of the Common Crawl’s petabytes of publicly available World Wide Web archives, to measure the impact of the arrival of new information about the constituent stocks in the S&P 500 index at the time of publishing. To the best of our knowledge, our study is the first one to use the Common Crawl in this way. We develop a cloud-based processing pipeline that identifies news articles in the Common Crawl News data that are related to the companies in the S&P 500. As the Common Crawl public data archives are getting bigger, they are opening doors for many real-world “data-hungry” applications such as transformers models GPT49 and BERT50, a recent class of deep learning language models. We believe that public sources of news data is important not only for natural language processing (NLP) and finance communities but also for more general studies in complex systems and computational social sciences that are aiming to characterize (mis)information propagation and dynamics in techno-socio-economic systems. The abundance of high-frequency data around the financial systems enables complex systems researchers to have microscopic observables that allow verification of different models, theories, and hypotheses.",CC-NEWS,,,
"Marco Squarcina, Mauro Tempesta, Lorenzo Veronese, Stefano Calzavara, Matteo Maffei – TU Wien, Austria; Università Ca’ Foscari Venezia, Italy",Can I take your subdomain? Exploring related-domain attacks in the modern web,https://arxiv.org/abs/2012.01946,papers,20200101Z00:00:00,,,"TU Wien, Austria; Università Ca’ Foscari Venezia, Italy","computer-security/internet-security, related-domain attacks","Our web security analysis aims at quantifying the number of domains hosting web applications that can be exploited by taking over the vulnerable domains discovered by RDScan. In particular, for every apex domain with at least one vulnerable subdomain, we selected from the CommonCrawl dataset [¹⁹ Common Crawl. Host- and domain-level webgraphs feb/mar/may 2020. https://commoncrawl.org/2020/06/host-and-domain-level-web-graphs-febmarmay-2020/, 2020.] the list of 200 most popular related-domains according to the Pagerank score [11]. From the homepage of these domains,we extracted the same-origin links that appear in the HTML code.",hyperlinkgraph/cc-main-2020-feb-mar-may/hostgraph,,,
"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu – Google, Mountain View, CA, USA",Exploring the limits of transfer learning with a unified text-to-text transformer,http://jmlr.org/papers/v21/20-074.html,papers,20200101Z00:00:00,,"Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.","Google, Mountain View, CA, USA","nlp/corpus-construction, nlp/language-model","We also introduce our approach for treating every problem as a text-to-text task and describe our “Colossal Clean Crawled Corpus” (C4), the Common Crawl-based data set we created as a source of unlabeled text data. [...] Common Crawl is a publicly-available web archive that provides “web extracted text” by removing markup and other non-text content from the scraped HTML files. This process produces around 20TB of scraped text data each month. Unfortunately, the majority of the resulting text is not natural language. Instead, it largely comprises gibberish or boiler-plate text like menus, error messages, or duplicate text. Furthermore, a good deal of the scraped text contains content that is unlikely to be helpful for any of the tasks we consider (offensive language, placeholder text, source code, etc.). To address these issues, we used the following heuristics for cleaning up Common Crawl’s web extracted text: [...] To assemble our base data set, we downloaded the web extracted text from April 2019 and applied the aforementioned filtering. This produces a collection of text that is not only orders of magnitude larger than most data sets used for pre-training (about 750 GB) but also comprises reasonably clean and natural English text. We dub this data set the “Colossal Clean Crawled Corpus” (or C4 for short) and release it as part of TensorFlow Datasets.⁸ [⁸https://www.tensorflow.org/datasets/catalog/c4]",CC-MAIN-2019-18 (WET),Tensorflow-C4,,
"Jay M. Patel – Specrom Analytics, Ahmedabad, India",Getting structured data from the internet,https://www.apress.com/gp/book/9781484265758,papers,20200101Z00:00:00,,,"Specrom Analytics, Ahmedabad, India",web-mining,[Chapter 6: Introduction to Common Crawl Datasets + Chapter 7: Web Crawl Processing on Big Data Scale],,,,
"Jonathan Dunn – University of Canterbury, Christchurch, New Zealand",Mapping languages: The Corpus of Global Language Use,https://doi.org/10.1007/s10579-020-09489-2,papers,20200101Z00:00:00,,"This paper describes a web-based corpus of global language use with a focus on how this corpus can be used for data-driven language mapping. First, the corpus provides a representation of where national varieties of major languages are used (e.g., English, Arabic, Russian) together with consistently collected data for each variety. Second, the paper evaluates a language identification model that supports more local languages with smaller sample sizes than alternative off-the-shelf models. Improved language identification is essential for moving beyond majority languages. Given the focus on language mapping, the paper analyzes how well this digital language data represents actual populations by (i) systematically comparing the corpus with demographic ground-truth data and (ii) triangulating the corpus with an alternate Twitter-based dataset. In total, the corpus contains 423 billion words representing 148 languages (with over 1 million words from each language) and 158 countries (again with over 1 million words from each country), all distilled from Common Crawl web data. The main contribution of this paper, in addition to describing this publicly-available corpus, is to provide a comprehensive analysis of the relationship between two sources of digital data (the web and Twitter) as well as their connection to underlying populations.","University of Canterbury, Christchurch, New Zealand","nlp/corpus-construction, nlp/language-identification","The raw portions of the Common Crawl dataset used to build the corpus are shown in Table 2. The corpus uses every portion of the crawl from March 2014 to June 2019, totaling 147 billion web pages in total. No temporal divisions are included in the corpus because these dates represent the time of collection rather than the time of production: web data does not expire and there is a long-tail in which the same samples are observed multiple times across different periods.",64 monthly crawls: March 2014 (CC-MAIN-2014-10) -- June 2019 (CC-MAIN-2019-29) (WET),earthlings.io/CGLU,,
"Liang Xu, Xuanwei Zhang, Qianqian Dong – CLUE Organization",CLUECorpus2020: A large-scale Chinese corpus for pre-training language model,https://arxiv.org/abs/2003.01355,papers,20200101Z00:00:00,,,CLUE Organization,nlp/corpus-construction,"we introduce the Chinese corpusfrom CLUE organization, CLUECorpus2020, a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language gen-eration. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl¹. [...] We download the corpus from July to December 2019 from Common Crawl. After the aforementioned filtering method, we extract the corpus of 100GB.",July to December 2019 (WARC),,,
"Andreas Giannakoulopoulos, Minas Pergantis, Nikos Konstantinou, Aristeidis Lamprogeorgos, Laida Limniati, Iraklis Varlamis – Ionian University, Corfu, Greece; Harokopio University of Athens, Athens, Greece",Exploring the Dominance of the English Language on the Websites of EU Countries,http://dx.doi.org/10.3390/fi12040076,papers,20200101Z00:00:00,,"The English language is the most dominant language in the Western world and its influence can be noticed in every aspect of human communication. It’s increasing diffusion, especially since the turn of the century, is hard to measure with conventional means. The present research studies the use of language in websites of European Union (EU) member states, in order to collect data about the prevalence of the English language in the different countries and regions of the European Union.To achieve a realistic representation of today’s landscape of the European Web, this study uses avast population of websites and a representative sampling size and methodology. By analyzing and processing the findings from over 100,000 websites from every country in the EU, a solid foundation is set that is used to explore the dominance of the English language in the European World Wide Web in general. This is the first study that examines the presence of English content in the websites of all EU member countries and provides statistical evidence regarding the ratio of English content availability for each country. Conclusively, the results of the research demonstrate that the English language is available on more than one quarter of all websites of non-English speaking EU member states.Moreover, it is available in the vast majority of multilingual and bilingual websites, while at the same time being the only language that is available in a number of monolingual websites. In addition, it is shown preference over the national language in a significant number of cases. A moderate negative correlation is found between a member state’s population and the availability of English in these countries’ websites and the same holds true for a member state’s Gross Domestic Product (GDP).Both these correlations indicate that smaller countries tend to provide more content in English in order to establish a stronger presence in the international environment. Taking into account the role of language in the expression of national identity, this study provides data and insights which may contribute to the discussion about the changes underway in the national identity of EU member states.","Ionian University, Corfu, Greece; Harokopio University of Athens, Athens, Greece","nlp/corpus-construction, web-science, socio-linguistics","The nature of the present research required as many websites as possible, so that both our total population and our sampling pool were as close a representation of reality as possible. For this purpose,we used information obtained from Common Crawl, a “repository of web crawl data that is universally accessible and analyzable” [34]. Among the data Common Crawl offers is an index of every available webpage for all member states of the EU amongst other countries. A process was developed in PHP:Hypertext Preprocessor (PHP) that used the CompounD indeX (CDX) server Application Program Interface (API) [35] to access Common Crawl’s Uniform Resource Locator (URL) index [36] and created a MariaDB database with information about websites from every member state of the EU. 
Although Common Crawl’s index provides all available crawled pages, our process of data collecting only focused on recording the landing page of one website per domain.",,,,
"Mukund Srinath, Shomir Wilson, C Lee Giles – Pennsylvania State University, PA, USA",Privacy at scale: Introducing the PrivaSeer corpus of web privacy policies,https://arxiv.org/abs/2004.11131,papers,20200101Z00:00:00,,,"Pennsylvania State University, PA, USA","nlp/corpus-construction, web-science, internet-security/privacy-policies","We used Common Crawl² to gather seed URLs to crawl for privacy policies from the web, as we describe in detail below. We filtered the Common Crawl URLs to get a set of possible links to web site privacy policies. We then crawled the filtered set to obtain candidate privacy policy documents. The complete pipeline from the Common Crawl URL dump to the gold standard privacy policy corpus is shown in Figure 1. [...] The Common Crawl Foundation is a non-profit which has been releasing large monthly internet web crawls since 2008. Monthly crawl archives provide a “snapshot of the web” by including re-crawls of popular domains (re-crawls from previous archives) and crawls of new domains. Common Crawl has also been releasing a domain-level webgraph from which the harmonic centrality of the crawled domains are calculated. This webgraph his used to sample popular domains that need to be re-crawled and to obtain new uncrawled domains. We downloaded the URL dump of the May, 2019 archive. Common Crawl reports that the archive contains 2.65 billion web pages or 220 TB of uncompressed content which were crawled between 19th and 27th of May, 2019. They also report that this archive contains 825 million URLs which were not contained in any previously released crawl archives. We applied a selection criteria on the downloaded URL dump to filter the URLs of likely privacy policy pages.",,,,
"Tianxi Dong, Jason Triche – Trinity University, San Antonio, TX, USA; University of Montana, MT, USA",A longitudinal analysis of job skills for entry-level data analysts,https://jise.org/Volume31/n4/JISEv31n4p312.pdf,papers,20200101Z00:00:00,,,"Trinity University, San Antonio, TX, USA; University of Montana, MT, USA","business-intelligence, nlp/corpus-construction","Our first challenge was how to collect job postings over past years because job websites do not keep historical data for more than one year. Therefore, we used the Common Crawl dataset to address this problem (http://commoncrawl.org/). Common Crawl is a non-profit organization that builds and maintains an open repository of web crawl data that is, in essence, a copy of the Internet. Common Crawl data contains over 25 billion web pages (Batikas, Claussen, and Peukert, 2018) and is widely used in hundreds of research projects (Batikas, Claussen, and Peukert, 2018; Cafarella et al., 2018). Since we were only interested in the content from Indeed.com, we only examined a very small fraction of the Common Crawl corpus.",,,,
"Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel – Google; Stanford University; UC Berkeley; Northeastern University; OpenAI; Harvard University; Apple",Extracting training data from large language models,https://arxiv.org/abs/2012.07805,papers,20200101Z00:00:00,,,Google; Stanford University; UC Berkeley; Northeastern University; OpenAI; Harvard University; Apple,"ai/ethical-concerns, nlp/language-models","We follow a different data collection process as used in GPT-2 (which follows Reddit links) in order to reduce the likelihood that our dataset has any intersection with the model’s training data. In particular, we select samples from a subset of Common Crawl⁶ [⁶http://commoncrawl.org/] to feed as context to the model.⁷ [⁷It is possible there is some intersection between these two datasets, effectively allowing this strategy to “cheat”. We believe this does not considerably affect results. First, any overlap between the two datasets is rare on average. Second, because we only use the first 5 or 10 tokens of each sample, any possible overlap will be small in absolute terms.]",,,,
"Thaer Sammar, Hadi Khalilia – Palestine Technical University, Tulkarm, West Bank",Going Back in Time to Find What Existed on the Web and How much has been Preserved: How much of Palestinian Web has been Archived?,http://proceedings.sriweb.org/akn/index.php/art/article/view/410,papers,20200101Z00:00:00,,"The web is an important resource for publishing and sharing content. The main characteristic of the web is its volatility. Content is added, updated, and deleted all the time. Therefore, many national and international institutes started crawling and archiving the content of the web. The main focus of national institutes is to archive the web related to their country heritage, for example, the National Library of the Netherlands is focusing on archiving website that are of value to the Dutch heritage. However, there are still countries that haven’t taken the action to archive their web, which will result in loosing and having a gap in the knowledge. In this research, we focus on shedding the light on the Palestinian web. Precisely, how much of the Palestinian web has been archived. First, we create a list of Palestinian hosts that were on the web. For that we queried Google index exploiting the time range filter in order to get hosts overtime. We collected in 98 hosts in average in 5-years granularity from the year 1990 to 2019. We also obtained Palestinian hosts from the DMOZ directory. We collected 188 hosts. Second, we investigate the coverage of collected hosts in the Internet Archive and the Common-Crawl. We found that coverage of Google hosts in the Internet Archive ranges from 0\% to 89\% from oldest to newest time-granularity. The coverage of DMOZ hosts was 96\%. The coverage of Google hosts in the Common-Crawl 57.1\% to 74.3, while the coverage of DMOZ hosts in the Common-Crawl was in average 25\% in all crawls. We found that even the host is covered in Internet Archive and Common-Crawl, the lifespan and the number of archived versions are low.","Palestine Technical University, Tulkarm, West Bank",web-archiving/regional-coverage,,CDX index,,,
"Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith – Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, Seattle, USA",RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models,https://arxiv.org/abs/2009.11462,papers,20200101Z00:00:00,,,"Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, Seattle, USA","no-citation-misclassified, ai/ethics-of-machine-learning, ai/machine-learning, nlp/language-model",,,,,
"Xinyue Wang, Zhiwu Xie – Virginia Polytechnic Institute and State University, Blacksburg, VA, USA",The Case For Alternative Web Archival Formats To Expedite The Data-To-Insight Cycle,https://doi.org/10.1145/3383583.3398542,papers,20200101Z00:00:00,"storage management, big data analysis, web archiving, file format","The WARC file format is widely used by web archives to preserve collected web content for future use. With the rapid growth of web archives and the increasing interest to reuse these archives as big data sources for statistical and analytical research, the speed to turn these data into insights becomes critical. In this paper we show that the WARC format carries significant performance penalties for batch processing workload. We trace the root cause of these penalties to its data structure, encoding, and addressing method. We then run controlled experiments to illustrate how severe these problems can be. Indeed, performance gain of one to two orders of magnitude can be achieved simply by reformatting WARC files into Parquet or Avro formats. While these results do not necessarily constitute an endorsement for Avro or Parquet, the time has come for the web archiving community to consider replacing WARC with more efficient web archival formats.","Virginia Polytechnic Institute and State University, Blacksburg, VA, USA","web-archiving, data formats, big data, data processing, WARC, Parquet",,,,,
"Srdjan Matic, Costas Iordanou, Georgios Smaragdakis, Nikolaos Laoutaris – TU Berlin, Germany; Cyprus University of Technology, Cyprus; IMDEA Networks Institute",Identifying Sensitive URLs at Web-Scale,https://do.tu-berlin.de/handle/11303/13215,papers,20200101Z00:00:00,,"Several data protection laws include special provisions for protecting personal data relating to religion, health, sexual orientation, and other sensitive categories. Having a well-defined list of sensitive categories is sufficient for filing complaints manually, conducting investigations, and prosecuting cases in courts of law. Data protection laws, however, do not define explicitly what type of content falls under each sensitive category. Therefore, it is unclear how to implement proactive measures such as informing users, blocking trackers, and filing complaints automatically when users visit sensitive domains. To empower such use cases we turn to the Curlie.org crowdsourced taxonomy project for drawing training data to build a text classifier for sensitive URLs. We demonstrate that our classifier can identify sensitive URLs with accuracy above 88%, and even recognize specific sensitive categories with accuracy above 90%. We then use our classifier to search for sensitive URLs in a corpus of 1 Billion URLs collected by the Common Crawl project. We identify more than 155 millions sensitive URLs in more than 4 million domains. Despite their sensitive nature, more than 30% of these URLs belong to domains that fail to use HTTPS. Also, in sensitive web pages with third-party cookies, 87% of the third-parties set at least one persistent cookie.","TU Berlin, Germany; Cyprus University of Technology, Cyprus; IMDEA Networks Institute","computer-security/internet-security, privacy, GDPR, general data protection regulation","When it comes to detecting specific sensitive categories, such as those defined by GDPR: Health, Politics, Religion, Sexual Orientation, Ethnicity, our classifier achieves a high classification accuracy as well. For specific categories, such as Health (98%), Politics (92%), Religion (97%), our classifier achieves an accuracy that exceeds the basic classification accuracy between sensitive and non-sensitive URLs (88%).¶ • Applying our classifier on a Common Crawl snapshot of the English speaking Web (around 1 Billion URLs), we identify 155 million sensitive URLs in more than 4 million domains. Health, Religion, and Political Beliefs are the most popular categories with around 70 millions, 35 millions, and 32 millions URLs respectively.¶ • Looking among the identified sensitive URLs we reach the conclusion that sensitive URLs are handled as any other URL, without any special provision for the privacy of users. For example, we show that 30% of sensitive URLs are hosted in domains that fail to use HTTPS. Also, in sensitive web pages with third-party cookies, 87% of the third-parties sets at least one persistent cookie.",,,,
Sebastian Nagel – Common Crawl,Experiments using a Distributed Web Crawler to Process and Index Web Archives,https://doi.org/10.5281/zenodo.4609371,papers,20200101Z00:00:00,,,Common Crawl,"web crawling, web archiving",,,,,
"Sebastian Roth, Timothy Barron, Stefano Calzavara, Nick Nikiforakis, Ben Stock – CISPA Helmholtz Center for Information Security, Germany; Stony Brook University, USA; Università Ca’ Foscari, Venezia, Italy",Complex security policy? a longitudinal analysis of deployed content security policies,https://par.nsf.gov/biblio/10173479,papers,20200101Z00:00:00,,"The Content Security Policy (CSP) mechanism was developed as a mitigation against script injection attacks in 2010. In this paper, we leverage the unique vantage point of the Internet Archive to conduct a historical and longitudinal analysis of how CSP deployment has evolved for a set of 10,000 highly ranked domains. In doing so, we document the long- term struggle site operators face when trying to roll out CSP for content restriction and highlight that even seemingly secure whitelists can be bypassed through expired or typo domains. Next to these new insights, we also shed light on the usage of CSP for other use cases, in particular, TLS enforcement and framing control. Here, we find that CSP can be easily deployed to fit those security scenarios, but both lack wide-spread adoption. Specifically, while the underspecified and thus inconsistently implemented X-Frame-Options header is increasingly used on the Web, CSP’s well-specified and secure alternative cannot keep up. To understand the reasons behind this, we run a notification campaign and subsequent survey, concluding that operators have often experienced the complexity of CSP (and given up), utterly unaware of the easy-to-deploy components of CSP. Hence, we find the complexity of secure, yet functional content restriction gives CSP a bad reputation, resulting in operators not leveraging its potential to secure a site against the non-original attack vectors.","CISPA Helmholtz Center for Information Security, Germany; Stony Brook University, USA; Università Ca’ Foscari, Venezia, Italy","computer-security/internet-security, web-science","To determine this IA-specific influence, we chose a second archive service to corroborate the IA’s data. In particular, Common Crawl (CC) [10] has been collecting snapshots of popular sites since 2013. For each date on which we found a CSP in the IA, we queried the CC API for a matching snapshot. Overall, we found 38,129 overlapping snapshots for 940 sites. Out of these, 729 (1.9%) on 127 sites were inconsistent between the two archives. For 96 cases the difference was the lack of block-all-mixed-content or upgrade-insecure-requests in the CC data. Further investigation showed that in the IA, these directives were separated from the remaining CSP with a comma instead of a semicolon. This likely relates to the IA joining headers with the same name with a comma. For those pages, we could always only find a single CSP header in the CC response. Moreover, starting from August 2018, these sites still used the aforementioned directives in the IA data, but CC returned two CSP headers (one including only those directives). Hence, we speculate this relates to a bug in CC, which was fixed around August 2018.",,,,
commoncrawl_citations_annotated_2021.csv
ADDED
@@ -0,0 +1,27 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Frankie Robertson, Jarkko Lagus, Kaisla Kajava – University of Jyväskylä, Finland; University of Helsinki, Finland",A COVID-19 news coverage mood map of Europe,https://www.aclweb.org/anthology/2021.hackashop-1.15,papers,20210101Z00:00:00,,"We present a COVID-19 news dashboard which visualizes sentiment in pandemic news coverage in different languages across Europe. The dashboard shows analyses for positive/neutral/negative sentiment and moral sentiment for news articles across countries and languages. First we extract news articles from news-crawl. Then we use a pre-trained multilingual BERT model for sentiment analysis of news article headlines and a dictionary and word vectors -based method for moral sentiment analysis of news articles. The resulting dashboard gives a unified overview of news events on COVID-19 news overall sentiment, and the region and language of publication from the period starting from the beginning of January 2020 to the end of January 2021.","University of Jyväskylä, Finland; University of Helsinki, Finland","nlp/corpus-construction, nlp/sentiment-analysis",,CC-NEWS,,,
"Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Matt Gardner – Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, USA",Documenting large webtext corpora: a case study on the Colossal Clean Crawled Corpus,https://arxiv.org/abs/2104.08758,papers,20210101Z00:00:00,,"Large language models have led to remarkable progress on many NLP tasks, and researchers are turning to ever-larger text corpora to train them. Some of the largest corpora available are made by scraping significant portions of the internet, and are frequently introduced with only minimal documentation. In this work we provide some of the first documentation for the Colossal Clean Crawled Corpus (C4; Raffel et al., 2020), a dataset created by applying a set of filters to a single snapshot of Common Crawl. We begin by investigating where the data came from, and find a significant amount of text from unexpected sources like patents and US military websites. Then we explore the content of the text itself, and find machine-generated text (e.g., from machine translation systems) and evaluation examples from other benchmark NLP datasets. To understand the impact of the filters applied to create this dataset, we evaluate the text that was removed, and show that blocklist filtering disproportionately removes text from and about minority individuals. Finally, we conclude with some recommendations for how to created and document web-scale datasets from a scrape of the internet.","Paul G. Allen School of Computer Science & Engineering, University of Washington, USA; Allen Institute for Artificial Intelligence, USA","nlp/corpus-construction, nlp/language-model",,CC-MAIN-2019-18 (WET),"Tensorflow-C4, Huggingface-Allenai-C4-English",,
"Isaac Caswell, Julia Kreutzer, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Javier Ortiz Suárez, Iroro Orife, Kelechi Ogueji, Rubungo Andre Niyongabo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, Mofetoluwa Adeyemi – Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja",Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets,https://arxiv.org/abs/2103.12028,papers,20210101Z00:00:00,,"With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, web-mined text datasets covering hundreds of languages. However, to date there has been no systematic analysis of the quality of these publicly available datasets, or whether the datasets actually contain content in the languages they claim to represent. In this work, we manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4), and audit the correctness of language codes in a sixth (JW300). We find that lower-resource corpora have systematic issues: at least 15 corpora are completely erroneous, and a significant fraction contains less than 50\% sentences of acceptable quality. Similarly, we find 82 corpora that are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-speakers of the languages in question, and supplement the human judgements with automatic analyses. 
Inspired by our analysis, we recommend techniques to evaluate and improve multilingual corpora and discuss the risks that come with low-quality data releases.",Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja,"nlp/corpus-construction, nlp/web-as-corpus, nlp/parallel-corpus, nlp/low-resource-language","We selected the corpora for their multilinguality and the inclusion of understudied languages in NLP. With the exception of WikiMatrix and Paracrawl, all corpora are derived from CommonCrawl, and distinguish themselves by the choice of filtering methods, LangID and automatic alignment technology.",,"CCAligned-2020, Tensorflow-C4-Multilingual, OSCAR",,
"P. Kalaharsha, B. M. Mehtre – Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, Indiab; School of Computer Science and Information Sciences (SCIS), University of Hyderabad, Hyderabad, India",Detecting Phishing Sites -- An Overview,https://arxiv.org/abs/2103.12739,papers,20210101Z00:00:00,,,"Institute for Development and Research in Banking Technology (IDRBT), Hyderabad, Indiab; School of Computer Science and Information Sciences (SCIS), University of Hyderabad, Hyderabad, India","computer-security/internet-security, computer-security/malicious-domain-detection",Alexa and Common crawl contains names of the legitimate sites which are likely to be used for phishing [62][63]. [63:http://index.commoncrawl.org],,,,
"Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel – Google Research",mT5: A massively multilingual pre-trained text-to-text transformer,https://arxiv.org/abs/2010.11934,papers,20210101Z00:00:00,,,Google Research,"nlp/corpus-construction, nlp/web-as-corpus, nlp/language-model","[...] we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages.",CC-MAIN-2019-18 (WET),Tensorflow-C4-Multilingual (mC4),,
"Bilal Tahir, Muhammad Amir Mehmood – University of Engineering and Technology, Lahore, Pakistan",Corpulyzer: A Novel Framework for Building Low Resource Language Corpora,https://ieeexplore.ieee.org/document/9316706,papers,20210101Z00:00:00,,,"University of Engineering and Technology, Lahore, Pakistan","nlp/corpus-construction, nlp/web-as-corpus, nlp/low-resource-language","Leveraging dataset from Common Crawl Corpus (CCC), first, we prepare a list of seed URLs by filtering the Urdu language webpages. Next, we use Corpulyzer to crawl the World-Wide-Web (WWW) over a period of four years (2016-2020). We build Urdu web corpus “UrduWeb20” that consists of 8.0 million Urdu webpages crawled from 6,590 websites. [...] building a corpus of a low-resource language from CCC is a challenging task due to: i) sampling techniques, ii) filtering of webpages of target languages, and iii) full parsing of CCC. [...] we build upon our previous approach [40] where we developed a dataset consisting of 1.28 million Urdu webpages from CCC 2016 dataset. [...] In general, CCC release meta-data as well as the crawled content where former is lightweight and easier to analyze and latter requires huge bandwidth to download and store the data. As an alternate strategy, we build three datasets using CC released data: i) CC-meta, ii) CC-Urdu-meta, and ii) CC-Urdu-crawl. First, we build CC-meta dataset to explore the impact of URL selection and crawling strategies of Common Crawl in general. This dataset consists of meta-information of 29.1 billion URLs in 11 common crawl releases from September2018 – June2019. This meta-information of each release is available in the form of compressed files (>200GB size) with information of webpage URL, MIME-type, and charset etc [94]. Next, we build CC-Urdu-meta dataset by filtering out Urdu webpages. We note that from August 2018 onward releases [95], CC also provides ISO6 language code of top three languages present in webpages after parsing HTML of the webpage from CLD2.",,,,
"Alexandra Sasha Luccioni, Joseph D. Viviano – Université de Montréal, Canada; Mila Québec AI Institute, Canada",What's in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus,https://arxiv.org/abs/2105.02732,papers,20210101Z00:00:00,,,"Université de Montréal, Canada; Mila Québec AI Institute, Canada","ai/ethics-of-machine-learning, nlp/corpus-construction, nlp/text-corpora","Given its size, both downloading and analyzing the Common Crawl are time-consuming and costly endeavors. The most recent version of the Common Crawl [https://commoncrawl.org/2020/12/nov-dec-2020-crawl-archive-now-available/], dating from November/December 2020, has 2.6 billion web pages in raw text format, saved in ‘shards’ each containing of tens of thousands of pages. Given our hardware constraints, we chose to focus on a subset of the corpus, randomly sampling 1% of the files it contains, roughly amounting toroughly 81 GB of textual content or 5,835,339 webpages in total, which we analyzed in terms of hate speech, adult content, and efficacy of perplexity-based filtering. All code used in these analysis are publicly available¹ [¹https://github.com/josephdviviano/whatsinthebox]. [...] We found that the three approaches compared suggest similar proportions of websites containing hate speech: 5.24% of websites from our sample were flagged by DELIMIT, 4.02% by HateSonar,and 6.38% by the n-gram approach². [²We are conscious of the high false positive rate of n-gram approaches and therefore only consider sites to be flagged if they contain 3 or more n-grams from the list.] Qualitative analysis of a sample of sites flagged by each approach showed that while n-grams picked up on racial slurs, HateSonar picked up on debates about racial supremacy and conspiracy theories. Many of the sites that DELIMIT flagged were adult content with mentions of violent acts towards specific ethnic groups, illustrating the fine line between sexual violence and hate speech. [...] While it can be argued that the Common Crawl corpus is an accurate portrayal of the discourse of modern society – which includes sexual content, hate speech, racial biases, and gender biases – we believe that it is up for debate whether this discourse is the one that we, as a community, want to use to train the models that translate our texts, influence our search results and answer our questions. Notably, the Common Crawl overrepresents those populations that are avid users of the internet: younger, English-speaking individuals from developed countries, [...]",,,,
"Maik Fröbe, Janek Bevendorff, Lukas Gienapp, Michael Völske, Benno Stein, Martin Potthast, Matthias Hagen – Martin-Luther-Universität Halle-Wittenberg, Germany; Bauhaus-Universität Weimar, Germany; Leipzig University, Germany",CopyCat: Near-Duplicates within and between the ClueWeb and the Common Crawl,https://dl.acm.org/doi/10.1145/3404835.3463246,papers,20210101Z00:00:00,,,"Martin-Luther-Universität Halle-Wittenberg, Germany; Bauhaus-Universität Weimar, Germany; Leipzig University, Germany",ir/duplicate-detection,,"CC-MAIN-2015-11, CC-MAIN-2017-04",,,
"Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy – EleutherAI",The Pile: An 800GB Dataset of Diverse Text for Language Modeling,https://arxiv.org/abs/2101.00027,papers,20210101Z00:00:00,,"Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models. With this in mind, we present the Pile: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets—both existing and newly constructed—many of which derive from academic or professional sources. Our evaluation of the untuned performance of GPT-2 and GPT-3 on the Pile shows that these models struggle on many of its components, such as academic writing. Conversely, models trained on the Pile improve significantly over both Raw CC and CC-100 on all components of the Pile, while improving performance on downstream evaluations. Through an in-depth exploratory analysis, we document potentially concerning aspects of the data for prospective users. We make publicly available the code used in its construction.¹ [¹https://pile.eleuther.ai/]",EleutherAI,"nlp/corpus-construction, nlp/text-corpora, nlp/language-model, nlp/text-corpora/legal-aspects","The growing need for data in language modeling has caused most existing large-scale language models to turn to the Common Crawl for most or all of their data (Brown et al., 2020; Raffel et al., 2019). While training on the Common Crawl has been effective, recent work has shown that dataset diversity leads to better downstream generalization capability (Rosset, 2019). [...] we also introduce a new filtered subset of Common Crawl, Pile-CC, with improved extraction quality. [...] 2.1 Pile-CC Common Crawl is a collection of website crawls from 2008 onwards, including raw web pages, metadata and text extractions. Due to the raw nature of the dataset, Common Crawl has the advantage of including text from diverse domains, but at the cost of varying quality data. Due to this, use of Common Crawl typically necessitates well-designed extraction and filtering. Our Common Crawl-based dataset, Pile-CC, uses jusText (Endrédy and Novák, 2013) on Web Archive files (raw HTTP responses including page HTML) for extraction, which yields higher quality output than directly using the WET files (extracted plain-text). [...] Surprisingly, raw Common Crawl performs better on the Pile BPB than CC-100, despite losing by a significant margin on LAMBADA and WikiText. We hypothesize that this is due to the perplexity based filtering used in CC-100, where a language model is trained on Wikipedia and all data with a perplexity too high or too low is discarded. This effectively discards any data too similar to or too different from Wikipedia, which severely limits the diversity of the collected data. This result suggests that future work using Common Crawl should take caution with filtering to preserve its diversity.","69 monthly crawls (WARC): CC-MAIN-2013-20 - CC-MAIN-2020-24, cf. https://github.com/leogao2/commoncrawl_downloader/blob/3a7a4a7c33aaee2a45f320f7bc57d0dcd3f3a220/indexes_20200607105929",The-Pile-English,,
"Leon Derczynski, Manuel R. Ciosici, Rebekah Baglini, Morten H. Christiansen, Jacob Aarup Dalsgaard, Riccardo Fusaroli, Peter Juel Henrichsen, Rasmus Hvingelby, Andreas Kirkedal, Alex Speed Kjeldsen, Claus Ladefoged, Finn Årup Nielsen, Jens Madsen, Malte Lau Petersen, Jonathan Hvithamar Rystrøm, Daniel Varab – ITU Copenhagen, Denmark; Aarhus University, Denmark; Danish Language Council, Denmark; TV2 Regionerne, Denmark; Karnov Group, Denmark; USC Information Sciences Institute, USA; Alexandra Institute, Denmark; University of Copenhagen, Denmark; Technical University of Denmark; Novo Nordisk, Denmark",The Danish Gigaword Corpus,https://gigaword.dk/,papers,20210101Z00:00:00,,,"ITU Copenhagen, Denmark; Aarhus University, Denmark; Danish Language Council, Denmark; TV2 Regionerne, Denmark; Karnov Group, Denmark; USC Information Sciences Institute, USA; Alexandra Institute, Denmark; University of Copenhagen, Denmark; Technical University of Denmark; Novo Nordisk, Denmark","nlp/corpus-construction, nlp/text-corpora","[...] the Danish section of Common Crawlis plagued by significant amounts of non-Danish content, in part due to the pervasive confusion between Danish and Norwegian Bokmål by highly multilingual language ID classifiers (Haas and Derczynski, 2021). Datasets derived exclusively from Common Crawl also have a bias toward webspeak and content from recent years, leaving models built over them sub-ptimally prepared to process older Danish. Common Crawl’s undirected collection of content often overrepresents some dialects at the expense of other dialects.",,,,
"Patrick Dinklage, Jonas Ellert, Johannes Fischer, Florian Kurpicz, Marvin Löbel – TU Dortmund University, Germany",Practical Wavelet Tree Construction,https://doi.org/10.1145/3457197,papers,20210101Z00:00:00,"text indexing, shared memory, external memory, distributed memory, data structures","We present new sequential and parallel algorithms for wavelet tree construction based on a new bottom-up technique. This technique makes use of the structure of the wavelet trees—refining the characters represented in a node of the tree with increasing depth—in an opposite way, by first computing the leaves (most refined), and then propagating this information upwards to the root of the tree. We first describe new sequential algorithms, both in RAM and external memory. Based on these results, we adapt these algorithms to parallel computers, where we address both shared memory and distributed memory settings.In practice, all our algorithms outperform previous ones in both time and memory efficiency, because we can compute all auxiliary information solely based on the information we obtained from computing the leaves. Most of our algorithms are also adapted to the wavelet matrix, a variant that is particularly suited for large alphabets.","TU Dortmund University, Germany","data-structures, text-indexing","Common Crawl. The Common Crawl corpus contains websites that are crawled by the Common Crawl Project. We use the WET files, which contain only the textual data of the crawled websites, i. e., no HTML tags. We also removed the meta information added by the Commoncrawl corpus. To be more precise, we used the following WET files: crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/wet/CC-MAIN-20190215183319-20190215205319-#ID.warc.wet, where #ID is in the range from 00000 to 00600. As we only care for the text, we removed the WARC meta information, i. e., each line consisting of WARC/1.0 and the following eight lines. CommonCrawl is the concatenation of all files sorted in ascending order by their ID.",CC-MAIN-2019-09 (600 WET files),,,
"Jay A. Olson, Johnny Nahas, Denis Chmoulevitch, Simon J. Cropper, Margaret E. Webb – Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, McGill University, Montreal, QC, Canada; Melbourne School of Psychological Sciences, University of Melbourne, Australia",Naming unrelated words predicts creativity,https://www.pnas.org/content/118/25/e2022340118,papers,20210101Z00:00:00,,"Many traditional measures of creativity require time-intensive and subjective scoring procedures. Their scores are relative to the specific sample, which makes multicultural or international assessments difficult. Our results show that a shorter and simpler task with automatic and objective scoring may be at least as reliable at measuring verbal creativity. This finding enables assessments across larger and more diverse samples with less bias.Several theories posit that creative people are able to generate more divergent ideas. If this is correct, simply naming unrelated words and then measuring the semantic distance between them could serve as an objective measure of divergent thinking. To test this hypothesis, we asked 8,914 participants to name 10 words that are as different from each other as possible. A computational algorithm then estimated the average semantic distance between the words; related words (e.g., cat and dog) have shorter distances than unrelated ones (e.g., cat and thimble). We predicted that people producing greater semantic distances would also score higher on traditional creativity measures. In Study 1, we found moderate to strong correlations between semantic distance and two widely used creativity measures (the Alternative Uses Task and the Bridge-the-Associative-Gap Task). In Study 2, with participants from 98 countries, semantic distances varied only slightly by basic demographic variables. There was also a positive correlation between semantic distance and performance on a range of problems known to predict creativity. Overall, semantic distance correlated at least as strongly with established creativity measures as those measures did with each other. Naming unrelated words in what we call the Divergent Association Task can thus serve as a brief, reliable, and objective measure of divergent thinking.The data and algorithm code have been deposited in the Open Science Framework (https://osf.io/vjazn/).","Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, McGill University, Montreal, QC, Canada; Melbourne School of Psychological Sciences, University of Melbourne, Australia","psychology/creativity, psychology/computational-scoring, nlp/word-embeddings",We chose the GloVe algorithm and the Common Crawl corpus [...],,,GloVe-word-embeddings,
|
14 |
+
"Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer – Facebook AI; University of Washington, USA",HTLM: Hyper-Text Pre-Training and Prompting of Language Models,https://arxiv.org/abs/2107.06955,papers,20210101Z00:00:00,,"We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advan- tages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task- adjacent supervision (e.g. class and id at- tributes often encode document category information), and (3) it allows for new structured prompting that follows the established seman- tics of HTML (e.g. to do zero-shot summarization by infilling <title> tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto- prompting itself, by simply generating the most likely hypertext formatting for any available training data. We will release all code and models to support future HTLM research.","Facebook AI; University of Washington, USA","nlp/corpus-construction, nlp/text-corpora, nlp/transformer-language-model",Our HyperTextLanguageModel (HTLM) is trained on 23TB of simplified HTML which we automatically extract from common crawl dumps [...],,,,
|
15 |
+
"Julien Abadji, Pedro Javier Ortiz Suárez, Laurent Romary, Benoît Sagot – Inria, Paris, France; Sorbonne Université, Paris, France",Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus,https://ids-pub.bsz-bw.de/frontdoor/deliver/index/docId/10468/file/Abadji_Suarez_Romary_Ungoliant_2021.pdf,papers,20210101Z00:00:00,,"Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.","Inria, Paris, France; Sorbonne Université, Paris, France","nlp/corpus-construction, nlp/text-corpora",,,,,
|
16 |
+
"Guy Grossman, Stephanie Zonszein – University of Pennsylvania, USA","Voted In, Standing Out: Public Response to Immigrants' Political Accession",https://osf.io/xd4wk/,papers,20210101Z00:00:00,,"In a context of nativism and poor representation of immigrant-origin ethnic minori- ties, what is the reaction of the host society when immigrants succeed at integration in political institutions? Building on threat theory—which links minorities’ political power to hostility against minoritized groups—we argue that when they win political office, immigrants pose a threat to natives’ dominant position. This in turn triggers a hostile reaction from a violent-prone fringe, the mass public and the elites. We test these dynamics across the last four UK general elections, using hate crime police records, public opinion data, and text data from over 500,000 news articles from 350 na- tional and local newspapers. We identify the public’s hostile reactions with a regression discontinuity design that leverages close election results between minority-immigrant and dominant group candidates. Our findings suggest a public backlash against ethnic minority immigrants’ integration into majority settings.","University of Pennsylvania, USA","political science, sociology, political integration of immigrants, ethnic minorities","News articles were extracted from Common Crawl, ethnic background of candidates is constructed by the authors, and constituency characteristics from 2001 and 2011 UK Decennial Census. [...] Then, to obtain the articles published by each of these newspapers, we looked up the URLs in Common Crawl (an open repository of web crawl data containing a snapshot of every web page at the moment of the crawl). Particularly in the Index for 2020-16 crawl, the most recent crawl at that moment. We retrieved the WARC (Web ARChive format) records for each crawled page from the newspaper, and extracted the pages’ HTML. From the HTML, we extracted the text, title, and byline using the Python package readabiliPy; the publication date using the Python library htmldate; the location by tokenizing the article with CoreNLP, and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Figure D.1 presents the geographical coverage of all extracted articles across constituencies.¶ [...] 4.3 Media tone toward migrant groups¶ Data We use data from over 500,000 articles from 350 national, regional and local UK newspapers, covering the general elections from 2010–2019.⁸ This data is from Common Crawl, which is an open repository of web crawl data. We assume that an article refers to a candidate’s ethnic group when three conditions are met: 1) the publication date is on election day and up to 10 months after each general election⁹, 2) the article contains mentions of terms referring to the candidate’s country or nationality of origin, which are extracted with the named entity annotator of CoreNLP and 3) such mentions co-occur in the article with a mention referring to the candidate’s constituency. The constituency is extracted by tokenizing the article with CoreNLP and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Overall, this data includes almost 150,000 mentions from 156 newspapers that meet these three conditions about the candidates’ group. [...] 
D Newspaper data, computation of media tone measures and validation of key elements Newspaper data We construct the dataset of newspaper articles using the following steps. To determine a comprehensive list of UK newspapers, we first identified a list of seed categories on Wikipedia (WP) (e.g. ’Category:Newspapers_published_in_England’), we took the recursive items of those categories (e.g. ’Category:Newspapers_published_in_England’ > ’Category:Newspapers_published_in_London’), we used WP article properties to filter out articles about non-newspapers (e.g. people, books), and we extracted the newspaper URLs from the WP Infobox using the Python package wptools. With this process we identified a list of UK newspapers URLs containing 337 newspapers in total. Then, to obtain the articles published by each of these newspapers, we looked up the URLs in Common Crawl (an open repository of web crawl data containing a snapshot of every web page at the moment of the crawl). Particularly in the Index for 2020-16 crawl, the most recent crawl at that moment. We retrieved the WARC (Web ARChive format) records for each crawled page from the newspaper, and extracted the pages’ HTML. From the HTML, we extracted the text, title, and byline using the Python package readabiliPy; the publication date using the Python library htmldate; the location by tokenizing the article with CoreNLP, and looking for tokens which match place names in the Index of Place Names in Great Britain, and mapping to the corresponding constituency. Figure D.1 presents the geographical coverage of all extracted articles across constituencies.",CC-MAIN-2020-16,,,
|
17 |
+
"Helen Ngo, João G. M. Araújo, Jeffrey Hui, Nicholas Frosst – Cohere, Toronto, Canada",No News is Good News: A Critique of the One Billion Word Benchmark,https://arxiv.org/abs/2110.12609,papers,20210101Z00:00:00,,"The One Billion Word Benchmark is a dataset derived from the WMT 2011 News Crawl, commonly used to measure language modeling ability in natural language processing. We train models solely on Common Crawl web scrapes partitioned by year, and demonstrate that they perform worse on this task over time due to distributional shift. Analysis of this corpus reveals that it contains several examples of harmful text, as well as outdated references to current events. We suggest that the temporal nature of news and its distribution shift over time makes it poorly suited for measuring language modeling ability, and discuss potential impact and considerations for researchers building language models and evaluation datasets.","Cohere, Toronto, Canada","nlp/language-model, nlp/language-model/perplexity","Common Crawl is a repository of web scrapes of the internet updated annually and is often used as a key data source for language models built on the open web [8, 2, 1]. We train benchmark models on three distinct datasets created by selecting data sampled from different years of Common Crawl: 2013 (the year which lm1b was released), 2016, and 2020. [...] Models which are trained on datasets temporally further removed from the lm1b corpus source (i.e. WMT 2011 News Crawl dataset) exhibit higher perplexity than those trained on datasets which are temporally closer.",,,,
|
18 |
+
Leo Gao – EleutherAI,An Empirical Exploration in Quality Filtering of Text Data,https://arxiv.org/abs/2109.00698,papers,20210101Z00:00:00,,"While conventional wisdom suggests that more aggressively filtering data from low-quality sources like Common Crawl always monotonically improves the quality of training data, we find that aggressive filtering can in fact lead to a decrease in model quality on a wide array of downstream tasks for a GPT-like language model. We speculate that this is because optimizing sufficiently strongly for a proxy metric harms performance on the true objective, suggesting a need for more robust filtering objectives when attempting to filter more aggressively. We hope this work leads to detailed analysis of the effects of dataset filtering design choices on downstream model performance in future work.",EleutherAI,"nlp/language-model, nlp/corpus-construction","The recent proliferation of ever larger language models has led to increasing demands on training data (Radford et al., 2018, 2019; Gokaslan and Cohen, 2019; Rosset, 2019; Shoeybi et al., 2019; Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020; Zeng et al., 2021). This data is increasingly derived from internet corpora like Common Crawl (Radford et al., 2019; Ortiz Suárez et al., 2019; Wenzek et al., 2020; Conneau et al., 2020; Brown et al., 2020; Gao et al., 2020; Raffel et al., 2020). However, the quality of raw Common Crawl data is often insufficient to be directly used. To combat this, many existing works use some kind of proxy for quality, like a classifier between known high quality data and low quality data (Brown et al., 2020; Gao et al., 2020; Zeng et al., 2021), handcrafted heuristics (Yang et al., 2020; Raffel et al., 2020), or keeping only documents with perplexity scores that fall in some middle quantile of an existing language model (Wenzek et al., 2020). Brown et al. (2020) in particular filter extremely aggressively using their classifier, discarding about 98.7% of their data. Previous work has shown that models trained on heuristic-filtered datasets perform better on downstream tasks (Raffel et al., 2020). However, Gao et al. (2020) show that a perplexity-filtered CC-derived dataset actually performs worse than unfiltered CC on certain tasks. [...] We hypothesize that this decline in performance is because of misalignment between the classifier objective, intended to be a proxy for quality, and actual document quality. For instance, a classifier to distinguish WebText2 from Common Crawl, as in GPT-3, would also exclude domains of text data not found as often in WebText2.",,,,
|
19 |
+
"Abeba Birhane, Vinay Uday Prabhu, Emmanuel Kahembwe – University College Dublin, Ireland; Lero, Dublin, Ireland; University of Edinburgh, UK","Multimodal datasets: misogyny, pornography, and malignant stereotypes",https://arxiv.org/abs/2110.01963,papers,20210101Z00:00:00,,"We have now entered the era of trillion parameter machine learning models trained on billion-sized datasets scraped from the internet. The rise of these gargantuan datasets has given rise to formidable bodies of critical work that has called for caution while generating these large datasets. These address concerns surrounding the dubious curation practices used to generate these datasets, the sordid quality of alt-text data available on the world wide web, the problematic content of the CommonCrawl dataset often used as a source for training large language models, and the entrenched biases in large-scale visio-linguistic models (such as OpenAI's CLIP model) trained on opaque datasets (WebImageText). In the backdrop of these specific calls of caution, we examine the recently released LAION-400M dataset, which is a CLIP-filtered dataset of Image-Alt-text pairs parsed from the Common-Crawl dataset. We found that the dataset contains, troublesome and explicit images and text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content. We outline numerous implications, concerns and downstream harms regarding the current state of large scale datasets while raising open questions for various stakeholders including the AI community, regulators, policy makers and data subjects.","University College Dublin, Ireland; Lero, Dublin, Ireland; University of Edinburgh, UK","ai/ethics-of-machine-learning, nlp/corpus-construction, nlp/text-corpora, nlp/multimodal-corpora","1.3 The Common-Crawl Common Crawl is a San Francisco based nonprofit 501(c)(3) organization that has been regularly crawling the entire WWW and generating archival snapshot data-dumps, often termed the Common- Crawl (CC) datasets in machine learning lexicon, since 2011. The current version of this archive (dated April 2021) is roughly 320 TB in size and spans 3.1 billion pages. The sheer scale of this dataset has an enduring allure in the AI community and has been used as a seeding dataset in training pipelines of high-profile projects⁵ [⁵https://commoncrawl.org/the-data/examples/] such as GPT-3 [34], CLUECorpus2020 [35], and XLM-R [36]. Inevitably this gargantuan dataset mined from the WWW suffers from serious issues. For instance, Matic et al. [37] used the Curlie.org crowdsourced taxonomy project to train a GDPR-Article(9)-Sensitive-URL classifier which revealed that, of the 1 Billion URLs they audited in the Common Crawl project, 155 million URLs fell into the sensitive category. The Realtoxicityprompts work [38] revealed that CommonCrawl contained over 300,000 documents from unreliable news sites and banned subReddit pages containing hate speech and racism. More recently, Luccioni and Viviano’s initial study [39] placed the ‘Hate speech’ content level to be around 4.02%-5.24% (the 1+ hate n-grams level was estimated higher at 17.78%). With regards to CCAligned, a 119- language parallel dataset built off 68 snapshots of Common Crawl, Caswell et al. [40] revealed that there were notable amounts of pornographic content (> 10%) found for 11 languages with prevalence rates being as high as 24% for language pairs such as en-om_KE. 
The LAION-400M dataset emerges from this landscape containing hundreds of millions of Image- Alt-text pairs parsed from the Common-Crawl dataset and filtered using a previously Common-Crawl trained AI model (CLIP [2]). With this background, we present our findings following our initial audit of the LAION-400M dataset below.",,,,
|
20 |
+
"Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, Aran Komatsuzaki – LAION.ai; Gentec Data, Romania; Technical University of Munich, Germany; Juelich Supercomputing Center, Germany; Georgia Institute of Technology; USA; EleutherAI",LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs,https://arxiv.org/abs/2111.02114,papers,20210101Z00:00:00,,"Multi-modal language-vision models trained on hundreds of millions of image-text pairs (e.g. CLIP, DALL-E) gained a recent surge, showing remarkable capability to perform zero- or few-shot learning and transfer even in absence of per-sample labels on target image data. Despite this trend, to date there has been no publicly available datasets of sufficient scale for training such models from scratch. To address this issue, in a community effort we build and release for public LAION-400M, a dataset with CLIP-filtered 400 million image-text pairs, their CLIP embeddings and kNN indices that allow efficient similarity search.","LAION.ai; Gentec Data, Romania; Technical University of Munich, Germany; Juelich Supercomputing Center, Germany; Georgia Institute of Technology; USA; EleutherAI","nlp/corpus-construction, nlp/multimodal-corpora","2.1 Distributed processing of Common Crawl¶ To create image-text pairs, we parse through WAT files from Common Crawl and parse out all HTML IMG tags containing an alt-text attribute. We download the raw images from the parsed URLs with asynchronous requests using Trio and Asks libraries.¶ 2.1.1 Filtering out unsuitable image-text pairs¶ After downloading the WAT files from Common Crawl, we apply the following filtering conditions: • All samples with less than 5 character alt-text length or less than 5 KB image size are dropped.¶ • Duplicate removal is performed with bloom filter based on URL and alt-text.¶ • We use CLIP to compute embeddings of the image and alt-text. Then we compute the cosine similarity of both embeddings and drop all samples with cosine similarity below 0.3. This threshold was selected based on human inspections.¶ • We use the CLIP embeddings of images and texts to filter out illegal contents.",,LAION-400M,,
|
21 |
+
"Michael Bugert, Iryna Gurevych – Ubiquitous Knowledge Processing Lab (UKP), Technical University of Darmstadt, Germany",Event Coreference Data (Almost) for Free: Mining Hyperlinks from Online News,https://aclanthology.org/2021.emnlp-main.38,papers,20210101Z00:00:00,,"Cross-document event coreference resolution (CDCR) is the task of identifying which event mentions refer to the same events throughout a collection of documents. Annotating CDCR data is an arduous and expensive process, explaining why existing corpora are small and lack domain coverage. To overcome this bottleneck, we automatically extract event coreference data from hyperlinks in online news: When referring to a significant real-world event, writers often add a hyperlink to another article covering this event. We demonstrate that collecting hyperlinks which point to the same article(s) produces extensive and high-quality CDCR data and create a corpus of 2M documents and 2.7M silver-standard event mentions called HyperCoref. We evaluate a state-of-the-art system on three CDCR corpora and find that models trained on small subsets of HyperCoref are highly competitive, with performance similar to models trained on gold-standard data. With our work, we free CDCR research from depending on costly human-annotated training data and open up possibilities for research beyond English CDCR, as our data extraction approach can be easily adapted to other languages.","Ubiquitous Knowledge Processing Lab (UKP), Technical University of Darmstadt, Germany","nlp/coreference resolution, event detection","To this end, we devise a data extraction pipeline which mines such datasets automatically from Common Crawl² [²https://commoncrawl.org/] and apply it to create the HYPERCOREF corpus, consisting of 40 news outlets with over 2M mentions in total, far exceeding the size of existing CDCR corpora.",,,,
|
22 |
+
"Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oğuz, Edouard Grave, Wen-tau Yih, Sebastian Riedel – Facebook AI Research; University College London, United Kingdom; University of Mannheim, Germany; ENS, PSL University, France; Inria, France; University of Washington, United States",The Web Is Your Oyster - Knowledge-Intensive NLP against a Very Large Web Corpus,https://arxiv.org/abs/2112.09924,papers,20210101Z00:00:00,"Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Information Retrieval (cs.IR), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences","In order to address increasing demands of real-world applications, the research for knowledge-intensive NLP (KI-NLP) should advance by capturing the challenges of a truly open-domain environment: web-scale knowledge, lack of structure, inconsistent quality and noise. To this end, we propose a new setup for evaluating existing knowledge intensive tasks in which we generalize the background corpus to a universal web snapshot. We investigate a slate of NLP tasks which rely on knowledge - either factual or common sense, and ask systems to use a subset of CCNet - the Sphere corpus - as a knowledge source. In contrast to Wikipedia, otherwise a common background corpus in KI-NLP, Sphere is orders of magnitude larger and better reflects the full diversity of knowledge on the web. Despite potential gaps in coverage, challenges of scale, lack of structure and lower quality, we find that retrieval from Sphere enables a state of the art system to match and even outperform Wikipedia-based models on several tasks. We also observe that while a dense index can outperform a sparse BM25 baseline on Wikipedia, on Sphere this is not yet possible. To facilitate further research and minimise the community's reliance on proprietary, black-box search engines, we share our indices, evaluation metrics and infrastructure.","Facebook AI Research; University College London, United Kingdom; University of Mannheim, Germany; ENS, PSL University, France; Inria, France; University of Washington, United States","nlp/question-answering, nlp/knowledge-intensive-tasks, ai/knowledge-base","[…] CCNet processes Common Crawl by performing deduplication, language identification and quality filtering (articles are split into three quality tiers: head, […] We pick the CCNet snapshot corresponding to the August 2019 Common Crawl […]",,,CCNet,
|
23 |
+
"Metod Jazbec, Barna Pàsztor, Felix Faltings, Nino Antulov-Fantulin, Petter N. Kolm – ETH Zurich, Switzerland; New York University, New York, USA",On the Impact of Publicly Available News and Information Transfer to Financial Markets,https://royalsocietypublishing.org/doi/10.1098/rsos.202321,papers,20210101Z00:00:00,,"We quantify the propagation and absorption of large-scale publicly available news articles from the World Wide Web to financial markets. To extract publicly available information, we use the news archives from the Common Crawl, a non-profit organization that crawls a large part of the web. We develop a processing pipeline to identify news articles associated with the constituent companies in the S&P 500 index, an equity market index that measures the stockperformance of US companies. Using machine learning techniques, we extract sentiment scores from the Common Crawl News data and employ tools from information theory to quantify the information transfer from public news articles to the US stock market. Furthermore, we analyse and quantify the economic significance of the news-based information with a simple sentiment-based portfolio trading strategy. Our findings provide support for that information in publicly available news on the World Wide Web has a statistically and economically significant impact on events infinancial markets.","ETH Zurich, Switzerland; New York University, New York, USA","statistical-finance, ai/machine-learning, nlp/sentiment-analysis, financial-markets",,,,,
|
24 |
+
"Daniel Varab, Natalie Schluter – IT University of Copenhagen, Denmark","MassiveSumm: a very large-scale, very multilingual, news summarisation dataset",https://aclanthology.org/2021.emnlp-main.797,papers,20210101Z00:00:00,,"Current research in automatic summarisation is unapologetically anglo-centered{--}a persistent state-of-affairs, which also predates neural net approaches. High-quality automatic summarisation datasets are notoriously expensive to create, posing a challenge for any language. However, with digitalisation, archiving, and social media advertising of newswire articles, recent work has shown how, with careful methodology application, large-scale datasets can now be simply gathered instead of written. In this paper, we present a large-scale multilingual summarisation dataset containing articles in 92 languages, spread across 28.8 million articles, in more than 35 writing scripts. This is both the largest, most inclusive, existing automatic summarisation dataset, as well as one of the largest, most inclusive, ever published datasets for any NLP task. We present the first investigation on the efficacy of resource building from news platforms in the low-resource language setting. Finally, we provide some first insight on how low-resource language settings impact state-of-the-art automatic summarisation system performance.","IT University of Copenhagen, Denmark","nlp/text-summarization, nlp/corpus-construction","Comparing with web-scrape multilingual datasets. We compared the intersection of our dataset with two large-scale web datasets widely used by the NLP community: Wikipedia⁴ [⁴https://en.wikipedia.org/wiki/List_of_Wikipedias#Edition_details as of May 10 2021] and Common Crawl⁵ [⁵April 2021 crawl CC-MAIN-2021-04 https://commoncrawl.github.io/cc-crawl-statistics/plots/languages.csv]. An overview of this comparison can be found in Table 4. The manual care that we took in curating the list of platforms from which we wanted to collect data resulted in more data from an improved diversity of languages. For 52 of our languages, MS-All either matches or surpasses the number of Wikipedia pages for the language in question, showing the importance of the full dataset simply as raw data. In fact, the majority of MassiveSumm languages from South Saharan Africa (14/18) have more documents in MS-All than in Wikipedia. And well over half of the MassiveSumm languages for Eurasia (38/63) have more documents in MS-All than in Wikipedia. Turning to Common Crawl, almost half of the languages from South Saharan Africa (8/18) have more pages in MS-All than in Common Crawl. Six out of 63 Eurasian languages have more articles in MS-All than in Common Crawl. When we consider even just the heavily filtered automatic summarisation portion of the data, MS, we find that 10 of the South Saharan African lan- guages contain more pages than Wikipedia, and 5 out of 18 of these languages contain more data than Common Crawl. For Eurasia, 19 of the 63 languages contain more pages than Wikipedia. Table 5 gives the proportions of the articles in MS-All that are also contained in Common Crawl, for those languages where more than 49\% can be obtained. This is 18 languages–around a fifth of the languages represented by MassiveSumm. 
Hence observe that large portions of easily indexible and crawlable, publicly available, diverse linguistic data are not being scraped into one of the most important datasets for NLP, both in size, but in determining to a large extent which languages get mainstream NLP research: Common Crawl. 5 Reflections on Low-Resource Language Automatic Summarisation The central datasets for automatic summarisation have consistently been for English. In this section we consider how this focus on English has resulted in limited dataset curation methodology development (Section 5.1) and limited automatic summarisation system design (Section 5.2).",,,,
|
25 |
+
Sebastian Nagel – Common Crawl,From web graphs to prioritizing web crawls,https://doi.org/10.5281/zenodo.6044920,papers,20210101Z00:00:00,,,Common Crawl,"web crawling, web-science/hyperlinkgraph",,,,,
|
26 |
+
"Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, Shruti Gupta, Subhash Chandra Bose Gali, Vish Subramanian, Partha Talukdar – Google; Indian Institute of Technology, Patna, India; Indian Institute of Technology, Bombay, India; Delhi Technological University, India",MuRIL: Multilingual Representations for Indian Languages,https://arxiv.org/abs/2103.10730,papers,20210101Z00:00:00,,,"Google; Indian Institute of Technology, Patna, India; Indian Institute of Technology, Bombay, India; Delhi Technological University, India","nlp/language-model, nlp/corpus-construction",Monolingual Data: We collect monolingual data for the 17 languages mentioned above from the Common Crawl OSCAR corpus¹ and Wikipedia².,,,OSCAR,
|
27 |
+
"Michael Völske, Janek Bevendorff, Johannes Kiesel, Benno Stein, Maik Fröbe, Matthias Hagen, Martin Potthast – Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany",Web Archive Analytics,https://dl.gi.de/handle/20.500.12116/34759,papers,20210101Z00:00:00,,"Web archive analytics is the exploitation of publicly accessible web pages and their evolution for research purposes—to the extent organizationally possible for researchers. In order to better understand the complexity of this task, the first part of this paper puts the entirety of the world's captured, created, and replicated data (the “Global Datasphere”) in relation to other important data sets such as the public internet and its web pages, or what is preserved thereof by the Internet Archive. Recently, the Webis research group, a network of university chairs to which the authors belong, concluded an agreement with the Internet Archive to download a substantial part of its web archive for research purposes. The second part of the paper in hand describes our infrastructure for processing this data treasure: We will eventually host around 8 PB of web archive data from the Internet Archive and Common Crawl, with the goal of supplementing existing large scale web corpora and forming a non-biased subset of the 30 PB web archive at the Internet Archive.","Bauhaus-Universität Weimar, Germany; Martin-Luther-Universität Halle-Wittenberg, Germany; Leipzig University, Germany","web-archiving, big data, data processing","In the Webis research group, we aim to store up to 8 PB of web archive data on our own premises, much of it originating from the Internet Archive, but also from other sources, such as the Common Crawl. [...] As of October 2020, almost 2.3 PB of data—of which 560 TB stem from the Internet Archive and the rest from the Common Crawl—have been downloaded and are stored on our infrastructure.",,,,
|
commoncrawl_citations_annotated_2022.csv
ADDED
@@ -0,0 +1,37 @@
|
|
|
1 |
+
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
|
2 |
+
"Vésteinn Snæbjarnarson, Haukur Barri Símonarson, Pétur Orri Ragnarsson, Svanhvít Lilja Ingólfsdóttir, Haukur Páll Jónsson, Vilhjálmur Þorsteinsson, Hafsteinn Einarsson – Miðeind ehf., Iceland; University of Iceland, Iceland",A Warm Start and a Clean Crawled Corpus -- A Recipe for Good Language Models,https://arxiv.org/abs/2201.05601,papers,20220101Z00:00:00,,"We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain (TLD). Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we translate and adapt the WinoGrande dataset for co-reference resolution. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.","Miðeind ehf., Iceland; University of Iceland, Iceland","nlp/corpus-construction, nlp/language-model","3.1. The Icelandic Common Crawl Corpus¶ The Common Crawl Foundation is a non-profit organization that scrapes large semi-random subsets of the internet regularly and hosts timestamped and compressed dumps of the web online¹⁰ [¹⁰https://commoncrawl.org/the-data/get-started/]. Each dump contains billions of web pages occupying hundreds of terabytes. Parsing these files directly requires storage and computing power not directly available to most and can come at a significant financial cost. The foundation also hosts indices of URIs and their locations within the large zipped dump files. While these indices are also large, their processing is feasible with a few terabytes of storage.¶ 3.1.1. Extracting Icelandic Common Crawl data¶ The Common Crawl indices, which contain URI and byte offsets within the compressed dumps, are used to reduce the search space when looking for Icelandic texts. The Common Crawl Index Server has a public API¹¹ [¹¹https://index.commoncrawl.org/] where URIs can be queried based on attributes such as date, MIME-type and substring. Using the API eliminates the need to fetch the massive index files. To extract Icelandic, the .is pattern is targeted to match the Icelandic top level domain (TLD), resulting in 63.5 million retrieved pages with URIs and byte locations within the compressed Common Crawl dumps. The computational efficiency of our method can be attributed to these steps. Given the predominant use of the .is TLD for Icelandic web content, we assume that other TLDs have a much lower proportion of Icelandic content. That said, a nontrivial amount of text in Icelandic is still likely to be found outside the .is domain and could be extracted by, e.g., parsing the whole Common Crawl, albeit at a much higher computational cost.¶ By targeting only the byte-offsets corresponding to the Icelandic TLD we extract candidate websites that have a high proportion of Icelandic content. In total, the compressed content is 687GiB on disk. 
All dumps since the start of the Common Crawl in 2008 until March 2020 were included.¶ Plain text was extracted from the collected WARC (Web Archive format) files using jusText (Pomikálek, 2011)¹² to remove boilerplate content and HTML tags.","CDX, WARC, ARC 2008 – March 2020",,,
|
3 |
+
"Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de-Viñaspre, Aitor Soroa – Meta AI; HiTZ Center - Ixa, University of the Basque Country UPV/EHU",Does Corpus Quality Really Matter for Low-Resource Languages?,https://arxiv.org/abs/2203.08111,papers,20220101Z00:00:00,"Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences","The vast majority of non-English corpora are derived from automatically filtered versions of CommonCrawl. While prior work has identified major issues on the quality of these datasets (Kreutzer et al., 2021), it is not clear how this impacts downstream performance. Taking Basque as a case study, we explore tailored crawling (manually identifying and scraping websites with high-quality content) as an alternative to filtering CommonCrawl. Our new corpus, called EusCrawl, is similar in size to the Basque portion of popular multilingual corpora like CC100 and mC4, yet it has a much higher quality according to native annotators. For instance, 66% of documents are rated as high-quality for EusCrawl, in contrast with <33% for both mC4 and CC100. Nevertheless, we obtain similar results on downstream tasks regardless of the corpus used for pre-training. Our work suggests that NLU performance in low-resource languages is primarily constrained by the quantity rather than the quality of the data, prompting for methods to exploit more diverse data sources.","Meta AI; HiTZ Center - Ixa, University of the Basque Country UPV/EHU","nlp/corpus-construction, nlp/corpus-representativeness, nlp/corpus-quality, nlp/language-models, nlp/low-resource-languages","In this paper, we explore tailored crawling (i.e., manually identifying and scraping websites with high-quality content) as an alternative to filtering CommonCrawl. Taking Basque as a case study, we collect 12.5M documents from 33 websites with Creative Commons content. The resulting corpus, called EusCrawl, is similar in size to the Basque portion of CC100 and mC4, but it has substantially less issues and a higher perceived quality according to our blind audit with native annotators. However, we find that this improvement does not carry over to downstream tasks, as masked language models pre-trained on either corpora obtain similar results on 5 NLU benchmarks. Our results suggests that data quantity and domain play a more important role, prompting for methods to exploit more diverse sources of data in low-resource languages.",,,,
|
4 |
+
"Stella Biderman, Kieran Bicheno, Leo Gao – EleutherAI",Datasheet for the Pile,https://arxiv.org/abs/2201.07311,papers,20220101Z00:00:00,,"This datasheet describes the Pile, a 825 GiB dataset of human-authored text compiled by EleutherAI for use in large-scale language modeling. The Pile is comprised of 22 different text sources, ranging from original scrapes done for this project, to text data made available by the data owners, to third-party scrapes available online.",EleutherAI,"nlp/corpus-construction, nlp/corpus-datasheet, nlp/corpus-representativeness","Pile-CC: The Pile-CC dataset is a sample from the Common Crawl WARCs that has been converted to text using jusText [Endrédy and Novák, 2013].¶ [...] Pile-CC: The Pile-CC dataset was created to be included in the Pile. The underlying data comes from the Common Crawl, which was created to give people access to the wealth of information contained in the internet. Its creators were concerned that only data mining companies would be able to collect this data, and has the explicit aim of democratizing technology.¶ [...] Pile-CC: The data is sourced from Common Crawl, a non-profit 501(c)(3) organization founded by Gil Elbaz. The data from Common Crawl was processed by EleutherAI into Pile-CC.¶ [...] Pile-CC: Instances are webpages.¶ [...] Pile-CC: 54, 953, 117 documents, totaling 227.12 GiB.¶ [...] Pile-CC: A tiny fraction of the entire Common Crawl was included, chosen arbitrarily and heavily filtered as detailed in Gao et al. [2020].¶ [...] Pile-CC: Data in the Pile-CC dataset were scraped from websites by the Common Craw and then downloaded directly from the Common Craw by EleutherAI.¶ [...] Pile-CC: The earliest date of contents in Pile-CC is unknown.¶",,The-Pile-English,,
|
5 |
+
"Jonas Andersson Schwarz – Göteborgs Universitet, Sweden","The hitchhiker's guide Method handbook for quantification of online linguistic data in a country-specific context. Official research report, Linguistic Explorations of Societies (Work Package 1)",https://gupea.ub.gu.se/bitstream/handle/2077/70890/2022_1_Andersson%20Schwarz.pdf,papers,20220101Z00:00:00,,,"Göteborgs Universitet, Sweden","nlp/corpus-construction, nlp/corpus-representativeness","Central actors (in no particular order)¶ CommonCrawl. California-based non-profit organization that makes monthly crawls of the openly available Web and provides datasets and metadata to the public freely. The CommonCrawl corpus contains petabytes of data including raw web page data, metadata data and text data collected since 2011. Since 2012, CommonCrawl’s archive is hosted by Amazon Web Services as part of its Public Data Sets program. Every crawl contains around 300 terabytes of data and roughly 3 billion pages. In 2020, a filtered version of this CommonCrawl archive was used to train OpenAI’s GPT-3 language model.¶ [...] Similarly, CommonCrawl (2021) provides an aggregate listing the percentages of their database covered by each language – measured as the primary language of each html document, as identified by the Compact Language Detector 2 (CLD2) algorithm. This was included as a good benchmark to compare with.¶ [...] In comparison, when plotting the cur- rently stated language distribution of CommonCrawl (2021) in relation to the same population numbers of L1 and L2 speakers, the CommonCrawl distribution displays a similarly low kurtosis and skewness.",,,,
|
6 |
+
"Makoto Morishita, Katsuki Chousa, Jun Suzuki, Masaaki Nagata – NTT Communication Science Laboratories, NTT Corporation, Japan",JParaCrawl v3.0: A Large-scale English-Japanese Parallel Corpus,https://arxiv.org/abs/2202.12607,papers,20220101Z00:00:00,,"Most current machine translation models are mainly trained with parallel corpora, and their translation accuracy largely depends on the quality and quantity of the corpora. Although there are billions of parallel sentences for a few language pairs, effectively dealing with most language pairs is difficult due to a lack of publicly available parallel corpora. This paper creates a large parallel corpus for English-Japanese, a language pair for which only limited resources are available, compared to such resource-rich languages as English-German. It introduces a new web-based English-Japanese parallel corpus named JParaCrawl v3.0. Our new corpus contains more than 21 million unique parallel sentence pairs, which is more than twice as many as the previous JParaCrawl v2.0 corpus. Through experiments, we empirically show how our new corpus boosts the accuracy of machine translation models on various domains. The JParaCrawl v3.0 corpus will eventually be publicly available online for research purposes.","NTT Communication Science Laboratories, NTT Corporation, Japan","nlp/machine-translation, nlp/parallel-corpus, nlp/corpus-construction","Our method extracts parallel sentences from the web. Thus, the first step is finding a website that has parallel sentences. This method is based on the hypothesis that websites containing the same English and Japanese sentences might have parallel texts. To list such parallel websites, we analyzed all the Common Crawl text archive data released from March 2019 to August 2021³. [³During this period, the Common Crawl project released 25 archives, and their text size was about 212 TB.] We identified the language in the archive by CLD2⁴ [⁴ https://github.com/CLD2Owners/cld2] and listed 100,000 large websites that roughly have the same size of English and Japanese texts. For this step, we used extractor⁵ [⁵ 5https://github.com/paracrawl/extractor] that was provided by the ParaCrawl project.",,,,
|
7 |
+
"Imad LAKIM, Ebtesam Almazrouei, Ibrahim Abu Alhaol, Merouane Debbah, Julien Launay – TII, Abu Dhabi, Arabic Emirates; LightOn, Paris, France","A Holistic Assessment of the Carbon Footprint of Noor, a Very Large Arabic Language Model",https://openreview.net/forum?id=B-lS3zH8Zq,papers,20220101Z00:00:00,,"As ever larger language models grow more ubiquitous, it is crucial to consider their environmental impact. Characterised by extreme size and resource use, recent generations of models have been criticised for their voracious appetite for compute, and thus significant carbon footprint. Although reporting of carbon impact has grown more common in machine learning papers, this reporting is usually limited to compute resources used strictly for training. In this work, we propose a holistic assessment of the footprint of an extreme-scale language model, Noor. Noor is an ongoing project aiming to develop the largest multi-task Arabic language models--with up to 13B parameters--leveraging zero-shot generalisation to enable a wide range of downstream tasks via natural language instructions. We assess the total carbon bill of the entire project: starting with data collection and storage costs, including research and development budgets, pretraining costs, future serving estimates, and other exogenous costs necessary for this international cooperation. Notably, we find that inference costs and exogenous factors can have a significant impact on total budget. Finally, we discuss pathways to reduce the carbon footprint of extreme-scale models.","TII, Abu Dhabi, Arabic Emirates; LightOn, Paris, France","nlp/language-model, nlp/transformer-language-model, carbon-footprint","We use Common Crawl (CC) for acquiring large amounts of web data. Each CC dump is on average around 10TB, and we discard it immediately after processing it. On average, it takes 24 hours to fully process a dump: we used 21 dumps from CC, meaning we stored 210TB of data for 24hours, equivalent to 57 kWh of energy consumption. After processing the dumps, we got on average 1.2TB of data per dump, thus 25TB in total. Considering that this data will be stored for 6 months, we end up with 1.3 MWh of energy consumption for the bulk data. Note that we keep the processed data in all languages (not just Modern Standard Arabic).",,,,
|
8 |
+
"Asier Gutiérrez-Fandiño, David Pérez-Fernández, Jordi Armengol-Estapé, David Griol, Zoraida Callejas – LHF Labs; Universidad Autónoma de Madrid, Spain; University of Edinburgh, United Kingdom; Universidad de Granada, Spain",esCorpius: A Massive Spanish Crawling Corpus,https://ui.adsabs.harvard.edu/abs/2022arXiv220615147G,papers,20220101Z00:00:00,"Computer Science - Computation and Language, Computer Science - Artificial Intelligence",,"LHF Labs; Universidad Autónoma de Madrid, Spain; University of Edinburgh, United Kingdom; Universidad de Granada, Spain","nlp/corpus-construction, nlp/text-corpora","[…] In this paper, we introduce esCorpius, a Spanish crawling corpus obtained from near 1 Pb of Common Crawl data. It is the most extensive corpus in Spanish with this level of quality in the extraction, purification and deduplication of web textual content […] A total of 39,502 compressed WARC (Web Archive) from Common Crawl files were processed (see section 3.3 for more details). The compressed information occupied about 180 TB and the size of the processed decompressed information is estimated to be more than 0.8 PB. Prior to content deduplication, the downloaded corpus was composed of 106.768.594.753 words, 3.129.248.875 lines and 163.518.405 web pages. The deduplicated and cleaned corpus size is 346.262.072.705 bytes (322.5 GB), with 104.073.706 total number of lines, 50.040.055.322 tokens, 1.125.798.968 paragraphs and 2.421.598.201 sentences.",,,,
|
9 |
+
"Arnold Overwijk, Chenyan Xiong, Jamie Callan – Microsoft; Carnegie Mellon University",ClueWeb22: 10 Billion Web Documents with Rich Information,https://doi.org/10.1145/3477495.3536321,papers,20220101Z00:00:00,"clueweb, web corpus, dataset","ClueWeb22, the newest iteration of the ClueWeb line of datasets, is the result of more than a year of collaboration between industry and academia. Its design is influenced by the research needs of the academic community and the real-world needs of large-scale industry systems. Compared with earlier ClueWeb datasets, the ClueWeb22 corpus is larger, more varied, and has higher-quality documents. Its core is raw HTML, but it includes clean text versions of documents to lower the barrier to entry. Several aspects of ClueWeb22 are available to the research community for the first time at this scale, for example, visual representations of rendered web pages, parsed structured information from the HTML document, and the alignment of document distributions (domains, languages, and topics) to commercial web search.This talk shares the design and construction of ClueWeb22, and discusses its new features. We believe this newer, larger, and richer ClueWeb corpus will enable and support a broad range of research in IR, NLP, and deep learning.",Microsoft; Carnegie Mellon University,"cc-cited-not-used, nlp/corpus-construction, nlp/text-corpora, information-retrieval","One approach is to sift CommonCrawl data, eg, the C4 dataset used to pretrain T5 [10], which provides sufficient quantity, but the quality quickly becomes a concern. For example, the cleaned CommonCrawl reflects a quite weird distribution of the web [5]. Language models pretrained on C4 often perform worse than models pretrained on higher quality corpora at the same scale. With ClueWeb22, we aim to provide the web corpus for research in the near future. The design of ClueWeb22 emphasizes on these goals: 1) to reflect the distribution of the web in real scenarios; 2) to provide web pages at large quantity and also with high quality; 3) to enable new research directions by including information important in industry but previously not publicly available.",,,,
|
10 |
+
"Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer – Meta AI",OPT: Open Pre-trained Transformer Language Models,https://arxiv.org/abs/2205.01068,papers,20220101Z00:00:00,,,Meta AI,"nlp/language-model, nlp/transformer-language-model, nlp/corpus-construction",,,,"CC-Stories, Pile-CC, CC-NEWS-RoBERTa-v2",
|
11 |
+
"Sylvain Lugeon, Tiziano Piccardi, Robert West – EPFL, Switzerland",Homepage2Vec: Language-Agnostic Website Embedding and Classification,https://ojs.aaai.org/index.php/ICWSM/article/download/19380/19152,papers,20220101Z00:00:00,,"Top-level domain. Some top-level domains (TLD) such as .edu or .biz can offer a good hint about the website's content. For example, a typical use case for .edu is university websites, whereas .biz is commonly associated with business activities. Following this intuition, we collected from Common Crawl,5 a large-scale sample of the Web, the 19 most frequent TLDs: .com, .org, .net, .info, .xyz, .club, .biz, .top, .edu, .online, .pro, .site, .vip, .icu, .buzz, .app, .asia, .gov, .space, excluding the country code TLD (ccTLD) because they indicate geographic origin, not website content. We represent this feature with a one-hot encoding vector of 19 dimensions.","EPFL, Switzerland","nlp/text-classification, web-site-classification",,,,,
|
12 |
+
"Johannes Zirngibl, Steffen Deusch, Patrick Sattler, Juliane Aulbach, Georg Carle, Mattijs Jonker – Technical University of Munich, Germany; University of Twente, The Netherlands","Domain Parking: Largely Present, Rarely Considered!",https://mediatum.ub.tum.de/1661842,papers,20220101Z00:00:00,,"Domain parking typically involves leveraging advertisements to generate revenue on otherwise inactive domain names. Their content is rarely of real value to users and tends to be highly similar across parked domains. They have commonalities beyond content alone: parked domains can share hosting and DNS infrastructure. Parking rarely receives special treatment in existing studies (e.g., content analyses or infrastructure concentration studies). While the presence and possible bias introduced by parked pages is sometimes acknowledged in studies, the studies still treat parked domains as any other, either because differentiation is infeasible, or because doing so is considered out-of-scope. We argue that the impact of parked domains on analyses regarding the current state and future development of the Internet should not be overlooked. In this paper, we motivate this argument through quantification, and take steps towards helping other researchers identify parked domains. We systematically collect a list of 82 parking services and develop DNS-based indicators to help identify parked domains. We next quantify the presence of parked domains, using large-scale DNS data containing hundreds of millions of registered domain names, representative for a significant part of the global DNS namespace. Overall, we pinpoint 60 M parked domains, which is a significant percentage of all names under consideration (23 %) and identify up to 4 % of domains from top lists to be parked. These findings demonstrate that the effect of parked pages is potentially pronounced. We also break down into the various parking services and DNS zones. This helps us demonstrate and further discuss the effect that domain parking can have on research and Internet consolidation.","Technical University of Munich, Germany; University of Twente, The Netherlands","web-science, internet/DNS, internet/domain-parking","Common Crawl While visual identification allowed us to validate the inferences to a reasonable extent, we wanted to upscale validation. Therefore, we consider Common Crawl (CC) data [21] [C. Crawl. (2022) The Common Crawl Corpus. [Online]. Available: https://commoncrawl.org/] and calculate the similarity of pages. Common Crawl is an open repository of web crawl data, collected at monthly intervals, accounting for hundreds of millions of unique domain names, and many more URLs. We consider CC data for Jan 2022 and the ∼60 M parked domains that we identify on Jan 28th, 2022. We extract the HTML content of parked pages from CC data, only considering URLs that contain exactly the registered domain. Furthermore, we require the crawl target to have been the landing page (i.e., the path of the URL is /) and also to have resulted in a useful response (i.e., HTTP status code of 200). Given these filters, ∼1.29 M HTML rich responses can be obtained. We extract visible text and tokenize it into words, remove stop words, apply lemmatization, and create a vector for the most-frequently used words for each page.",,,,
|
13 |
+
"Alexandra Sasha Luccioni, Frances Corry, Hamsini Sridharan, Mike Ananny, Jason Schultz, Kate Crawford – Hugging Face; University of Southern California, USA; New York University, USA; Microsoft Research, USA","A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication",https://doi.org/10.1145/3531146.3533086,papers,20220101Z00:00:00,"datasets, data stewardship data management dataset deprecation","Datasets are central to training machine learning (ML) models. The ML community has recently made significant improvements to data stewardship and documentation practices across the model development life cycle. However, the act of deprecating, or deleting, datasets has been largely overlooked, and there are currently no standardized approaches for structuring this stage of the dataset life cycle. In this paper, we study the practice of dataset deprecation in ML, identify several cases of datasets that continued to circulate despite having been deprecated, and describe the different technical, legal, ethical, and organizational issues raised by such continuations. We then propose a Dataset Deprecation Framework that includes considerations of risk, mitigation of impact, appeal mechanisms, timeline, post-deprecation protocols, and publication checks that can be adapted and implemented by the ML community. Finally, we propose creating a centralized, sustainable repository system for archiving datasets, tracking dataset modifications or deprecations, and facilitating practices of care and stewardship that can be integrated into research and publication processes.","Hugging Face; University of Southern California, USA; New York University, USA; Microsoft Research, USA","ai/ethics-of-machine-learning, nlp/text-corpora, nlp/corpus-construction, cc-cited-not-used","When it comes to filtering large text datasets scraped from the Web, given their sheer size (C4 represents 2.3 TB of data, whereas the Common Crawl has 139TB), filtering them is complex and time-consuming, although approaches have been proposed for reducing duplicates and train-test overlap [53]. [...] In practice, documenting and deprecating these datasets is akin to a game of whack-a-mole, since new versions of the Common Crawl come out every few months. Analyzing what they contain and their degrees of contamination through common evaluation tasks would take significant effort.",,,,
"Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, Mofetoluwa Adeyemi – Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja",Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets,https://doi.org/10.1162/tacl\_a\_00447,papers,20220101Z00:00:00,,"{With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50\\% sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.}",Google Research; Masakhane NLP; Turkic Interlingua; Haverford College; RobotsMali; Intel Labs; University of Zambia; Google; AIMS-AMMI; Inria; University of Zurich; Stanford University; Kwame Nkrumah University of Science and Technology; Sorbonne Université; Niger-Volta LTI; University of Waterloo; University of Electronic Science and Technology of China; University of Notre Dame; Bayero University Kano; University of South Florida; Hugging Face; Jacobs University Bremen; University of Moratuwa; EleutherAI; Obafemi Awolowo University; University of Ibadan; Instadeep; University of Maryland; Defence Space Administration Abuja,"nlp/corpus-construction, nlp/web-as-corpus, nlp/parallel-corpus, nlp/low-resource-language","We selected the corpora for their multilinguality and the inclusion of understudied languages in NLP. 
With the exception of WikiMatrix and Paracrawl, all corpora are derived from CommonCrawl, and distinguish themselves by the choice of filtering methods, LangID and automatic alignment technology.",,"CCAligned-2020, Tensorflow-C4-Multilingual, OSCAR",,
"Julien Abadji, Pedro Ortiz Suarez, Laurent Romary, Benoît Sagot – Inria, France; Sorbonne Université, France",Towards a Cleaner Document-Oriented Multilingual Crawled Corpus,https://arxiv.org/abs/2201.06642,papers,20220101Z00:00:00,,"The need for raw large raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing. And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.","Inria, France; Sorbonne Université, France","nlp/corpus-construction, nlp/web-as-corpus",,,OSCAR,,
"Wang Tongjing, Zhao Yin, Ziyu Bao, Evert Meijers – Utrecht University, The Netherlands; Delft University of Technology, The Netherlands",Dataset of intercity relationships between 293 Chinese cities extracted and classified on the basis of toponym co-occurrences on Common Crawl,https://www.researchgate.net/profile/Evert-Meijers/publication/362952059_Dataset_of_intercity_relationships_between_293_Chinese_cities_extracted_and_classified_on_the_basis_of_toponym_co-occurrences_on_Common_Crawl/links/6308bfc25eed5e4bd11f7938/Dataset-of-intercity-relationships-between-293-Chinese-cities-extracted-and-classified-on-the-basis-of-toponym-co-occurrences-on-Common-Crawl.pdf,papers,20220101Z00:00:00,"city networks, toponym co-occurrence, city relationship, geographical information retrieval","Although the importance of intercity relationships is theoretically acknowledged for cities’ socioeconomic development, the availability of such relational data often limits relevant urban studies. One of the new approaches of collecting city relational data is to extract the co-appearance of their place names from web texts. However, dealing with a gigantic web corpus is difficult for domain researchers given the complexities of processing terabytes of raw data. This paper develops an efficient and easy-to-follow method to extract a dataset of intercity relationships between 293 large Chinese cities applying the toponym co-occurrence method to a web archive. Our method successfully filters a 6.98 TB CC data set into a 202 GB single language text corpus. A highly-scalable Hadoop- based framework processes the full CC corpus utilizing a 1080 CPU cluster on the Amazon Elastic Map/Reduce infrastructure. To reveal more details of the intercity relationships, the intercity relationships are further classified into six categories: industry, information technology (IT), finance, research, culture, and government.","Utrecht University, The Netherlands; Delft University of Technology, The Netherlands","information retrieval, toponymy, dataset-creation",The data was retrieved from a Common Crawl raw corpus through a series of data processing. The web pages in this corpus that do not contain Chinese characteristics or Chinese placenames were filtered out based on keyword selection. The filtered Chinese corpus was 202 GB and the filtered Chinese corpus with placenames was about 139.5GB. Then we count the number of web pages where two city names co-appear. These intercity relationships were further classified into six categories using a lexicon-based classification method.,CC-MAIN-2019-18 (WET),,,
"Per E Kummervold, Freddy Wetjen, Javier de la Rosa – National Library of Norway (NLN), Norway",The Norwegian Colossal Corpus: A Text Corpus for Training Large Norwegian Language Models,http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.410.pdf,papers,20220101Z00:00:00,,,"National Library of Norway (NLN), Norway",nlp/corpus-construction,"Common Crawl (2022) is a non-profit organization that has been collecting data from the web and providing these archives to the public since 2011. Common Crawl-based datasets are popular for training transformer models and are the basis for the enormous 800GB Pile dataset (Gao, 2020), among others. There are extracted Norwegian datasets that are also based on Common Crawl. The Open Super-large Crawled Aggregated coRpus (OSCAR) (Suárez et al., 2019) contains 4.7GB (800M words) of Norwegian Bokmål and 54MB (9M words) of Norwegian Nynorsk. Using a cleaned version of Common Crawl, Google compiled a multilingual version of their English colossal corpus, called MC4 (2022), for training their mT5 model (Xue et al., 2020). The Norwegian part of that dataset is roughly 94GB (14B words). Both OSCAR and the MC4 datasets have been made available on Hugging Face (2022). Unfortunately, their respective licenses do not allow for redistribution within the NCC. To overcome this limitation, we are releasing scripts for the preparation, cleaning, deduplication, and formatting of these datasets, so they can be interleaved 3855with the NCC. By combining NCC with OSCAR and MC4, it should be possible to create a deduplicated Norwegian corpus with over 100GB of text (15B words).",,,OSCAR,
"Hanlin Li, Nicholas Vincent – Northwestern University, USA",Rethinking Data Governance: A Labor-Oriented Approach,https://criticalautomation.org/wp-content/uploads/2022/03/li-vincent-data-governance.pdf,papers,20220101Z00:00:00,,"The current data governance paradigm in which technology companies solely decide how user data is collected and used has introduced many issues to the tech sector. Prominent examples include information asymmetry about user data’s value, monopolistic practices enabled by data’s network effects, and power imbalance with respect to data aggregation and analysis. This work explicates how viewing users’ data-generating activities through a labor lens can help to mitigate these issues and provides corresponding design and research directions.","Northwestern University, USA","dataset-creation, data governance, user-generated content, artificial intelligence, machine learning, cc-cited-not-used","2.1 Information asymmetry about user data's value¶ The lack of transparency about user data's value helps make it possible for operators of for-profit computing systems to monetize user data and reap the bulk of its financial benefits. Currently, there exists a substantial gap between what data-driven technology companies know about user data's value and what users themselves do. For example, while social media platforms are well aware of the amount of financial benefits of user engagement, users do not have a window into how their collective attention and knowledge powers such businesses. This information asymmetry is further exacerbated by the fact that the vast majority of data that users produce during their interaction with modern technologies is rarely visible to themselves and is used downstream without their awareness and consent. For instance, the rise of AI technologies is possible largely due to the abundance of data unwittingly generated by the public for purposes other than enabling AI models. Prominent examples include Flickr photos [12], Wikipedia articles [14], and the Common Crawl dataset consisting of publicly available webpages [11]. In many of such cases, users produce data without being aware of its value and potential, giving technology companies the opportunity to extract an enormous amount of revenue from such data.",,,,
"Jiameng Pu, Zain Sarwar, Sifat Muhammad Abdullah, Abdullah Rehman, Yoonjin Kim, Parantapa Bhattacharya, Mobin Javed, Bimal Viswanath, Virginia Tech, LUMS Pakistan – Virginia Tech, USA; University of Chicago, USA; LUMS, Pakistan, University of Virginia, USA",Deepfake Text Detection: Limitations and Opportunities,https://jmpu.github.io/files/Deepfake%20Text%20Detection%20Limitations%20and%20Opportunities_CR.pdf,papers,20220101Z00:00:00,,,"Virginia Tech, USA; University of Chicago, USA; LUMS, Pakistan, University of Virginia, USA","nlp/text-classification, deep-fake-detection, misinformation, disinformation",,,,Grover-RealNews,
"Florian Hantke, Ben Stock – CISPA Helmholtz Center for Information Security, Germany",HTML Violations and Where to Find Them: A Longitudinal Analysis of Specification Violations in HTML,https://swag.cispa.saarland/papers/hantke2022violations.pdf,papers,20220101Z00:00:00,,,"CISPA Helmholtz Center for Information Security, Germany","web-science, internet-security","[...] we leveraged Common Crawl [22] to analyze more than 23K popular domains over the course of eight years. [...] the crawler framework first collects meta information for each of the listed domains using Common Crawl [22] as a basis for the following analyses (1). This Common Crawl approach makes it possible to take a look into the past and analyze old versions of websites as well as current snapshots. Unlike similar crawling studies before using the Internet Archive[32], with Common Crawl, we are not limited by rate limit issues as we can request the database and S3 bucket directly. This makes the process fast and enables to analyze nearly a thousand pages per minute from one IP address over multiple days. The meta information that the framework collects contains details on where an HTML document can be found in the Common Crawl’s dumps. For each domain, the framework collects meta information from up to 100 pages and hands them to the crawler.",,,,
"Todor Markov, Chong Zhang, Sandhini Agarwal, Tyna Eloundou, Teddy Lee, Steven Adler, Angela Jiang, Lilian Weng – OpenAI",A Holistic Approach to Undesired Content Detection in the Real World,https://arxiv.org/abs/2208.03274,papers,20220101Z00:00:00,,,OpenAI,"nlp/text-classification, nlp/corpus-construction, toxic content, hate speech",,,,,
"Joshua Reynolds, Adam Bates, Michael Bailey – New Mexico State University, USA; University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA",Equivocal URLs: Understanding the Fragmented Space of URL Parser Implementations,https://link.springer.com/chapter/10.1007/978-3-031-17143-7_9,papers,20220101Z00:00:00,,,"New Mexico State University, USA; University of Illinois at Urbana-Champaign, USA; Georgia Institute of Technology, USA","computer-security/internet-security, web-security, URL parsing","We also surveyed ∼350 million URLs sampled uniformly and randomly from the approximately 3 billion URLs in Common Crawl's January 2022 URL Index [35]. [35 Kreymer, I., Chuang, G.: Announcing the common crawl index! (2015)]",,,,
"Mehmet Korkmaz, Emre Koçyiğit, Özgür Şahingöz, Banu Diri – Yildiz Technical University, Istanbul, Turkey; Biruni University, Istanbul, Turkey",A Hybrid Phishing Detection System Using Deep Learning-based URL and Content Analysis,https://www.eejournal.ktu.lt/index.php/elt/article/download/31197/15556,papers,20220101Z00:00:00,,,"Yildiz Technical University, Istanbul, Turkey; Biruni University, Istanbul, Turkey",computer-security/internet-security,,,,,
"Mohd Faizal Ab Razak, Mohd Izham Jaya, Ferda Ernawan, Ahmad Firdaus, Fajar Agung Nugroho – Universitas Dian Nuswantoro, Semarang, Indonesia",Comparative Analysis of Machine Learning Classifiers for Phishing Detection,https://ieeexplore.ieee.org/abstract/document/9930531/,papers,20220101Z00:00:00,,,"Universitas Dian Nuswantoro, Semarang, Indonesia",computer-security/internet-security,"… The source for this dataset is from the University Malaysia of Sarawak, compiled from PhishTank, OpenPhish, Alexa and Common Crawl. One method for detecting new phishing websites is to utilize heuristics such as the URL and CSS detection …",,,,
"L. Ranaldi, A. Nourbakhsh, F. Fallucchid, FM. Zanzotto – Guglielmo Marconi University, Roma, Italy; University of Rome Tor Vergata, Roma, Italy",C-OSINT: COVID-19 Open Source artificial INTelligence framework,https://ceur-ws.org/Vol-3260/paper16.pdf,papers,20220101Z00:00:00,,"With the emergence of COVID-19 disease worldwide, a market of the products related to this disease formed across the Internet. By the time these goods were in short supply, many uncontrolled Dark Web Marketplaces (DWM) were active in selling these products. At the same time, Dark Web Forums (DWF) became proxies for spreading false ideas, fake news about COVID-19, and advertising products sold in DWMs. This study investigates the activities entertained in the DWMs and DWFs to propose a learning-based model to distinguish them from their related counterparts on the surface web. To this end, we propose a COVID-19 Open Source artificial INTelligence framework (C-OSINT) to automatically collect and classify the activities done in DWMs and DWFs. Moreover, we corporate linguistic and stylistic solutions to leverage the classification performance between the content found in DWMs and DWFs and two surface web sources. Our results show that using syntactic and stylistic representation outperforms the Transformer based results over these domains.","Guglielmo Marconi University, Roma, Italy; University of Rome Tor Vergata, Roma, Italy",nlp/transformer-language-model; web-science/dark-web,,,,,
"Shuheng Liu, Alan Ritter – Georgia Institute of Technology",Do CoNLL-2003 Named Entity Taggers Still Work Well in 2023?,https://arxiv.org/abs/2212.09747,papers,20220101Z00:00:00,,,Georgia Institute of Technology,"nlp/named-entity-recognition, dataset-creation","Our dataset follows this distribution to collect Reuters news articles published between December 5th and 7th, 2020, collected from the Common Crawl Foundation³. [³http://commoncrawl.org/]",,,,
"Matyáš Boháček, Michal Bravanský, Filip Trhlík, Václav Moravec – Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom",Fine-grained Czech News Article Dataset: An Interdisciplinary Approach to Trustworthiness Analysis,https://arxiv.org/abs/2212.08550,papers,20220101Z00:00:00,,,"Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom","nlp/fake-news-detection, dataset-creation","Initially, we assembled a collection of almost 94, 000 articles by scraping URLs of 45 Czech news sources obtained from Common Crawl² [²https://commoncrawl.org/]. These sources included mainstream journalistic websites, tabloids, independent news outlets, and websites that are part of the disinformation ecosystem [ 26 ], capturing the full scope of journalistic content in the Czech Republic. [...] We applied multiple filters and balancing mechanisms to mitigate deficiencies caused by inherent flaws in Common Crawl, which reduced the dataset’s size from 94, 000 to 10, 000 items. This way, we also ensured that the data is as representative of the Czech news ecosystem and as diverse as possible.",,,,
"Mehtab Khan, Alex Hanna – Yale Law School, USA; Distributed AI Research Institute",The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability,https://ssrn.com/abstract=4217148,papers,20220101Z00:00:00,,"There has been increased attention toward the datasets that are used to train and build AI technologies from the computer science and social science research communities, but less from legal scholarship. Both Large-Scale Language Datasets (LSLDs) and Large-Scale Computer Vision Datasets (LSCVDs) have been at the forefront of such discussions, due to recent controversies involving the use of facial recognition technologies, and the discussion of the use of publicly-available text for the training of massive models which generate human-like text. Many of these datasets serve as “benchmarks” to develop models that are used both in academic and industry research, while others are used solely for training models. The process of developing LSLDs and LSCVDs is complex and contextual, involving dozens of decisions about what kinds of data to collect, label, and train a model on, as well as how to make the data available to other researchers. However, little attention has been paid to mapping and consolidating the legal issues that arise at different stages of this process: when the data is being collected, after the data is used to build and evaluate models and applications, and how that data is distributed more widely. In this article, we offer four main contributions. First, we describe what kinds of objects these datasets are, how many different kinds exist, what types of modalities they encompass, and why they are important. Second, we provide more clarity about the stages of dataset development – a process that has thus far been subsumed within broader discussions about bias and discrimination – and the subjects who may be susceptible to harms at each point of development. Third, we provide a matrix of both the stages of dataset development and the subjects of dataset development, which traces the connections between stages and subjects. Fourth, we use this analysis to identify some basic legal issues that arise at the various stages in order to foster a better understanding of the dilemmas and tensions that arise at every stage. We situate our discussion within wider discussion of current debates and proposals related to algorithmic accountability. This paper fulfills an essential gap when it comes to comprehending the complicated landscape of legal issues connected to datasets and the gigantic AI models trained on them.","Yale Law School, USA; Distributed AI Research Institute","nlp/corpus-construction, dataset-creation, data-governance, privacy, legal/copyright",D. Common Crawl: Archiving the Whole Web The Common Crawl (CC) dataset is one of the most popular datasets used in the training of what have typically been called large language models. [...],,,,
"Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, Jenia Jitsev – LAION; UC Berkeley, USA; Gentec Data; TU Darmstadt, Germany; Hessian.AI; University of Washington, Seattle, USA; Technical University of Munich, Germany; Stability AI; EleutherAI; Juelich Supercomputing Center (JSC), Germany; Research Center Juelich (FZJ), Germany",LAION-5B: An open large-scale dataset for training next generation image-text models,https://arxiv.org/abs/2210.08402,papers,20220101Z00:00:00,,,"LAION; UC Berkeley, USA; Gentec Data; TU Darmstadt, Germany; Hessian.AI; University of Washington, Seattle, USA; Technical University of Munich, Germany; Stability AI; EleutherAI; Juelich Supercomputing Center (JSC), Germany; Research Center Juelich (FZJ), Germany","nlp/corpus-construction, nlp/multimodal-corpora","By starting from Common Crawl [1] and filtering this data source with an existing CLIP model, we derive a dataset consisting of three parts: 2.32 billion English image-text examples, 2.26 billion multilingual examples, and 1.27 billion examples that are not specific to a particular language (e.g., places, products, etc.). [...] To extract image-text pairs from Common Crawl, we parse the HTML IMG (image) tags from Common Crawl’s WAT metadata files.⁴ [⁴See https://commoncrawl.org/the-data/get-started/ for details of the metadata format.] Specifically, we focus on images with an alt-text so we can create image-text pair.",,LAION-5B,,
"{NLLB Team}, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang – Meta AI; UC Berkeley, USA; Johns Hopkins University, USA",No Language Left Behind: Scaling Human-Centered Machine Translation,https://arxiv.org/abs/2207.04672,papers,20220101Z00:00:00,,"Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.","Meta AI; UC Berkeley, USA; Johns Hopkins University, USA","nlp/corpus-construction, nlp/parallel-corpus, nlp/low-resource-language, nlp/language-identification","We begin with web data as our starting point, provided by CommonCrawl (CC)18 and ParaCrawl (Bañón et al., 2020).",,NLLB,,
"Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro – Microsoft; NVIDIA","Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model",https://arxiv.org/abs/2201.11990,papers,20220101Z00:00:00,"Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences",,Microsoft; NVIDIA,nlp/language-model,"Resources such as Common Crawl (CC) provide snapshots of the web which can be utilized as a source of language data. While these data sources contain an enormous amount of language data, they also require carefully designed preprocessing steps in order to select data which is of reasonable quality. As prior work has found (e.g., [9]), the quality of unfiltered Common Crawl data is lower than that of curated datasets and steps should be taken to increase the average quality of data selected from Common Crawl for LM pretraining. [...] Common Crawl: As mentioned previously, Common Crawl comprises an immense amount of data. We chose to process two snapshots, 2020-50 and 2021-04, with the aim of acquiring around 150B tokens of training data. The first step of this process is language detection [11] and text extraction from the raw HTML included in the Common Crawl WARC files¹. Following the rationale presented in [11], we used the pycld2² and jusText³ libraries for these tasks. [...] In addition to Common Crawl data, we leveraged a number of other previously generated datasets. From The Pile, we selected Books3, OpenWebText2, Stack Exchange, PubMed Abstracts, Wikipedia, Gutenberg (PG-19), BookCorpus2, NIH ExPorter, and Pile-CC datasets. We also included the CC-Stories and RealNews datasets used to train Megatron [63].",,,,
"Tom Alby, Robert Jäschke – Humboldt-Universität zu Berlin, Berlin, Germany",Analyzing the Web: Are Top Websites Lists a Good Choice for Research?,https://link.springer.com/chapter/10.1007/978-3-031-16802-4_2,papers,20220101Z00:00:00,,"The web has been a subject of research since its beginning, but it is difficult if not impossible to analyze the whole web, even if a database of all URLs would be freely accessible. Hundreds of studies have used commercial top websites lists as a shortcut, in particular the Alexa One Million Top Sites list. However, apart from the fact that Amazon decided to terminate Alexa, we question the usefulness of such lists for research as they have several shortcomings. Our analysis shows that top sites lists miss frequently visited websites and offer only little value for language-specific research. We present a heuristic-driven alternative based on the Common Crawl host-level web graph while also taking language-specific requirements into account.","Humboldt-Universität zu Berlin, Berlin, Germany","web-science, domain-ranking",,hyperlinkgraph/cc-main-2021-feb-apr-may/hostgraph,,,
"Olexandra Belz – Ivan Franko National University of Lviv, Ukraine",Use of schema.org micro-markup in e-commerce projects,http://baltijapublishing.lv/index.php/threeseas/article/view/1964/1973,papers,20220101Z00:00:00,,"The purpose of the article is to identify the most effective schema.org micro-markup schemes used in e-commerce projects. Methodology. The research included competitive intelligence among the leading online platforms operating in Europe in general and in Ukraine in particular. The study involved TOP-8 e-commerce projects in Ukraine and TOP-9 global cross-border marketplaces operating in Europe. The service validator.schema.org was chosen as the research tool. Results. The study showed that the most popular schema.org micro-markup format is JSON-LD. In general, 82.4% of the surveyed sites use JSON-LD microdata format. Some sites use two microdata formats: JSON-LD and Microdata. But none of the top online marketplaces use the RDFa micro-markup format. Popular marketplaces operating in Ukraine and Europe often use the same types of schema.org vocabulary. However, the frequency of using micro-markup by top marketplaces operating in Ukraine is much higher than the frequency of using micro-markup by top marketplaces operating in Europe. In addition, Ukrainian marketplaces use a much wider list of schema.org micro-markup properties than marketplaces operating in Europe. However, no online store has implemented the properties of advantages and disadvantages of goods recommended by Google in the scheme. Practical implications. The study suggests schema.org micro-markup schemes for homepage, category page, product page, about page, payment and delivery page, warranty and returns page, contact page and blog. The proposed templates of micro-markup schemes were validated using the validator.schema.org service. The study recommends using the JSON-LD format for semantic markup of website content. Value/originality. Implementation of effective semantic markup of site content will allow search engines to more accurately identify the information presented on the site. This, in turn, will improve the visibility of the online marketplace in the Search Engine Results Page of Google, Bing, Yahoo! etc.","Ivan Franko National University of Lviv, Ukraine","e-commerce, online marketplaces, linked data, schema.org annotations, SEO","Since 2008, the Common Crawl project has been crawling websites to collect web page data (extracting metadata and web page text). At the time of writing, the latest scan took place from November 26 to December 10, 2022. As a result of this scan, 3.35 billion web pages were processed and 420 petabytes of content were removed (Common Crawl, 2022). Both scientists and practitioners are working with the obtained data sets of the Common Crawl project.¶ On September 22, 2022, the Web Data Commons (WDC) project released the Schema.org Table Annotation Benchmark (SOTAB) for public download (Web Data Commons, 2022).",,,WebDataCommons,
"Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, Saehoon Kim – Kakao Brain, South Korea",Coyo-700m: Image-text pair dataset,https://github.com/kakaobrain/coyo-dataset,papers,20220101Z00:00:00,,We collected about 10 billion pairs of alt-text and image source in HTML documents in Common Crawl from Oct. 2020 to Aug. 2021. and eliminated uninformative pairs through the image and text level filtering process with minimal cost. The following figure outlines our data collection procedure.,"Kakao Brain, South Korea",nlp/multimodal-corpora,,"five CommonCrawl dumps, ranging from 2017 to 2020",COYO-700M,,
"Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, Quoc Le – Google",LaMDA: Language Models for Dialog Applications,https://arxiv.org/abs/2201.08239,papers,20220101Z00:00:00,,,Google,"nlp/language-model, nlp/transformer-language-model","E Pre-training data composition¶ The pre-training data, called Infiniset, is a combination of dialog data from public dialog data and other public web documents. It consists of 2.97B documents and 1.12B dialogs with 13.39B utterances. The composition of the data is as follows: 50% dialogs data from public forums; 12.5% C4 data [11]; 12.5% code documents from sites related to programming like Q&A sites, tutorials, etc; 12.5% Wikipedia (English); 6.25% English web documents; and 6.25% Non-English web documents. The total number of words in the dataset is 1.56T. Note that this composition was chosen to achieve a more robust performance on dialog tasks (Section 4) while still keeping its ability to perform other tasks like code generation. As future work, we can study how the choice of this composition may affect the quality of some of the other NLP tasks performed by the model.",,,Tensorflow-C4,
"Mark Edward Phillips, Sawood Alam – University of North Texas, USA; Internet Archive, USA",Moving the End of Term Web Archive to the Cloud to Encourage Research Use and Reuse,https://digital.library.unt.edu/ark:/67531/metadc1998717/m2/1/high_res_d/EOT_WADL_2022.pdf,papers,20220101Z00:00:00,,"The End of Term Web (EOT) Archive is a collaborative project with a goal of collecting the United States federal web, loosely defined as .gov and .mil, every four years coinciding with presidential elections and often a transition in the Executive Branch of the government. In 2021 the End of Term team began to process the longitudinal web archive for EOT-2008, EOT-2012, EOT-2016, and EOT-2020 to move into the Amazon S3 storage service as part of the Amazon Open Data Program. This effort adopted tools, structures, and documentation developed by Common Crawl in an effort to maximize potential research access and reuse of existing tools and documentation. This paper presents the process of organizing, staging, processing, and moving these collections into the Amazon cloud.","University of North Texas, USA; Internet Archive, USA",web archive,,,,,
"Gilles Adda, Annelies Braffort, Ioana Vasilescu, François Yvon – Université Paris-Saclay, CNRS, LISN, Paris, France",Deliverable D1.14 Report on the French Language. European Language Equality (ELE); EU project no. LC- 01641480 – 101018166,https://european-language-equality.eu/wp-content/uploads/2022/03/ELE___Deliverable_D1_14__Language_Report_French_.pdf,papers,20220101Z00:00:00,,,"Université Paris-Saclay, CNRS, LISN, Paris, France","nlp/resources, French, nlp/language-models, nlp/text-corpora","The CommonCrawl project³⁷ [³⁷https://commoncrawl.org/] aggregates Web crawled data that is orders or magnitude larger than these resources for many languages; furthermore this corpus is being updated on a regular basis. By using parts of the French subset of CommonCrawl, possibly conjoined with the more curated corpora alluded to above has enabled to train large-scale BERT-style Language Models (LMs) – FlauBERT (Le et al., 2020) is built with a corpus containing about 12B running words, CamemBERT (Martin et al., 2020) uses the 22B words OSCAR, and these numbers continue to grow, albeit at a much slower pace than the corresponding English cor- pora.",,,,
commoncrawl_citations_annotated_2023.csv
ADDED
The diff for this file is too large to render. See raw diff.
commoncrawl_citations_annotated_2024.csv
ADDED
@@ -0,0 +1,5 @@
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Xian Gong, Paul X. Mccarthy, Marian-Andrei Rizoiu, Paolo Boldi – University of Technology, Australia; University of New South Wales, Australia; Università degli Studi di Milano, Italy",Harmony in the Australian Domain Space,https://doi.org/10.1145/3614419.3643998,papers,20240101Z00:00:00,,"In this paper we use for the first time a systematic approach in the study of harmonic centrality at a Web domain level, and gather a number of significant new findings about the Australian web. In particular, we explore the relationship between economic diversity at the firm level and the structure of the Web within the Australian domain space, using harmonic centrality as the main structural feature. The distribution of harmonic centrality values is analyzed over time, and we find that the distributions exhibit a consistent pattern across the different years. The observed distribution is well captured by a partition of the domain space into six clusters; the temporal movement of domain names across these six positions yields insights into the Australian Domain Space and exhibits correlations with other non-structural characteristics. From a more global perspective, we find a significant correlation between the median harmonic centrality of all domains in each OECD country and one measure of global trust, the WJP Rule of Law Index. Further investigation demonstrates that 35 countries in OECD share similar harmonic centrality distributions. The observed homogeneity in distribution presents a compelling avenue for exploration, potentially unveiling critical corporate, regional, or national insights.","University of Technology, Australia; University of New South Wales, Australia; Università degli Studi di Milano, Italy",,"There are many public collections of web crawls, but one that is known for being very reliable and quite wide in scope is the Common Crawl1. Common Crawl’s measurements are preferred for web and network analysis due to their extensive coverage, regular updates, and large-scale, publicly accessible datasets, which reduces the need for resource-intensive data collection and is applicable across various research in a reproducible way. [...]",,,,
"Peter Carragher, Evan M. Williams, Kathleen M. Carley – Carnegie Mellon University, USA",Misinformation Resilient Search Rankings with Webgraph-based Interventions,https://doi.org/10.1145/3670410,papers,20240101Z00:00:00,"search engine optimization, misinformation, website reliability, pagerank","The proliferation of unreliable news domains on the internet has had wide-reaching negative impacts on society. We introduce and evaluate interventions aimed at reducing traffic to unreliable news domains from search engines while maintaining traffic to reliable domains. We build these interventions on the principles of fairness (penalize sites for what is in their control), generality (label/fact-check agnostic), targeted (increase the cost of adversarial behavior), and scalability (works at webscale). We refine our methods on small-scale webdata as a testbed and then generalize the interventions to a large-scale webgraph containing 93.9M domains and 1.6B edges. We demonstrate that our methods penalize unreliable domains far more than reliable domains in both settings and we explore multiple avenues to mitigate unintended effects on both the small-scale and large-scale webgraph experiments. These results indicate the potential of our approach to reduce the spread of misinformation and foster a more reliable online information ecosystem. This research contributes to the development of targeted strategies to enhance the trustworthiness and quality of search engine results, ultimately benefiting users and the broader digital community.","Carnegie Mellon University, USA","web-science/hyperlinkgraph, misinformation, disinformation, domain-ranking",,,,,
"Tommaso Fontana, Sebastiano Vigna, Stefano Zacchiroli – Inria, DGDI, Paris, France; Università degli Studi di Milano, Dipartimento di Informatica, Milan, Italy; LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France",WebGraph: The Next Generation (Is in Rust),https://doi.org/10.1145/3589335.3651581,papers,20240101Z00:00:00,,,"Inria, DGDI, Paris, France; Università degli Studi di Milano, Dipartimento di Informatica, Milan, Italy; LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France",web-science/hyperlinkgraph; graph-processing; programming-languages/Java; programming-languages/Rust; cc-cited-not-used,"Moreover, open data projects such as Common Crawl and Software Heritage (SWH) [5] have used WebGraph to compress and distribute their data.",,,,
"{Henry S} Thompson – The University of Edinburgh, Edinburgh, United Kingdom",Improved methodology for longitudinal Web analytics using Common Crawl,https://www.research.ed.ac.uk/en/publications/improved-methodology-for-longitudinal-web-analytics-using-common-,papers,20240101Z00:00:00,,"Common Crawl is a multi-petabyte longitudinal dataset containing over 100 billion web pages which is widely used as a source of language data for sequence model training and in web science research. Each of its constituent archives is on the order of 75TB in size. Using it for research, particularly longitudinal studies, which necessarily involve multiple archives, is therefore very expensive in terms of compute time and storage space and/or web bandwidth. Two new methods for mitigating this problem are presented here, based on exploiting and extending the much smaller (<200 gigabytes (GB) compressed) index which is available for each archive. By adding Last-Modified timestamps to the index we enable longitudinal exploration using only a single archive. By comparing the distribution of index features for each of the 100 segments into which archive is divided with their distribution over the whole archive, we have identified the least and most representative segments for a number of recent archives. Using this allows the segment(s) that are most representative of an archive to be used as proxies for the whole. We illustrate this approach in an analysis of changes in URI length over time, leading to an unanticipated insight into the how the creation of Web pages has changed over time.","The University of Edinburgh, Edinburgh, United Kingdom","web-archiving, web-dataset",,,,,