citations-annotated / commoncrawl_citations_annotated_2023.csv
cc_project_author,post_title,cc_project_url,cc_project_category,post_date,keywords,abstract,cc_author_affiliation,cc_class,cc_snippet,cc_dataset_used,cc_derived_dataset_about,cc_derived_dataset_used,cc_derived_dataset_cited
"Asadullah Safi, Satwinder Singh – Nangarhar University, Afghanistan; Central University of Punjab, Bathinda, Punjab, India",A Systematic Literature Review on Phishing Website Detection Techniques,https://www.sciencedirect.com/science/article/pii/S1319157823000034,papers,20230101Z00:00:00,"Phishing, Phishing Detection, Deep Learning, Cyber Security, Machine Learning","Phishing is a fraud attempt in which an attacker acts as a trusted person or entity to obtain sensitive information from an internet user. In this Systematic Literature Survey (SLR), different phishing detection approaches, namely Lists Based, Visual Similarity, Heuristic, Machine Learning, and Deep Learning based techniques, are studied and compared. For this purpose, several algorithms, data sets, and techniques for phishing website detection are revealed with the proposed research questions. A systematic Literature survey was conducted on 80 scientific papers published in the last five years in research journals, conferences, leading workshops, the thesis of researchers, book chapters, and from high-rank websites. The work carried out in this study is an update in the previous systematic literature surveys with more focus on the latest trends in phishing detection techniques. This study enhances readers' understanding of different types of phishing website detection techniques, the data sets used, and the comparative performance of algorithms used. Machine Learning techniques have been applied the most, i.e., 57 as per studies, according to the SLR. In addition, the survey revealed that while gathering the data sets, researchers primarily accessed two sources: 53 studies accessed the PhishTank website (53 for the phishing data set) and 29 studies used Alexa's website for downloading legitimate data sets. Also, as per the literature survey, most studies used Machine Learning techniques; 31 used Random Forest Classifier. Finally, as per different studies, Convolution Neural Network (CNN) achieved the highest Accuracy, 99.98%, for detecting phishing websites.","Nangarhar University, Afghanistan; Central University of Punjab, Bathinda, Punjab, India","computer-security/internet-security, web-security","[phishing website detection research relying] Common Crawl (Rao et al., 2019); (Rashid et al., 2020) ; (Geyik et al., 2021) ; (Korkmaz and Sahingoz, 2020) ; (Chiew et al., 2019) ; (Feng and Yue, 2020) ; (Wei et al., 2020)",,,,
"Josh A. Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, Katerina Sedova – Georgetown University’s Center for Security and Emerging Technology, USA; OpenAI; Stanford Internet Observatory, USA",Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations,https://arxiv.org/abs/2301.04246,papers,20230101Z00:00:00,"Computers and Society (cs.CY), FOS: Computer and information sciences, FOS: Computer and information sciences",,"Georgetown University’s Center for Security and Emerging Technology, USA; OpenAI; Stanford Internet Observatory, USA","nlp/generative-language-models, ai/ethics-of-machine-learning, cc-cited-not-used","While some of this data is typically taken from relatively structured sources such as Wikipedia, a large majority of data usually comes from tools like Common Crawl that scrape the web for publicly available text.¹⁴⁷ [147. CommonCrawl freely publishes its archives of web data. See “So you’re ready to get started.,” Common Crawl, accessed June 27, 2022, https://commoncrawl.org/the-data/get-started/. But anyone can build their own software for web scraping or use other tools to extract data from websites.]",,,,
"Xinyue Wang – Virginia Tech, USA",Large Web Archive Collection Infrastructure and Services,http://hdl.handle.net/10919/113345,papers,20230101Z00:00:00,,"The web has evolved to be the primary carrier of human knowledge during the information age. The ephemeral nature of much web content makes web knowledge preservation vital in preserving human knowledge and memories. Web archives are created to preserve the current web and make it available for future reuse. In addition to its preservation purpose, web archive data is also used as a source for research and for lost information discovery. However, the reuse of web archive data is inherently challenging because of the scale of data size and requirements of big data tools to serve and analyze web archive data efficiently. In this research, we propose to build a web archive big data processing infrastructure that can support efficient and scalable web archive reuse like quantitative data analysis and browsing services. We adopt industry frameworks and tools to establish a platform that can provide high-performance computation for web archive initiatives and users. We propose to convert the standard web archive data file format to a columnar data format for efficient future reuse. Our experiments show that our proposed design can significantly improve quantitative data analysis tasks for common web archive data usage. Our design can also serve an efficient web browsing service without adopting a sophisticated web hosting architecture. In addition to the standard web archive data, we also integrate Twitter data into our design as a unique web archive resource. Twitter is a prominent source of data for researchers in a variety of fields and an integral element of the web's history. We aggregate the Twitter data from different sources and integrate it into the suggested design for reuse. We are able to greatly increase the processing performance of workloads around social media data by overcoming the data loading bottleneck with a web-archive-like Parquet data format.","Virginia Tech, USA","web-archiving, data formats, big data, data processing, WARC, Parquet, CDX","We use Common Crawl’s web archiving data crawled from May 20 to 23, 2018. The data set consists of 1219 Gzip compressed WARC files totaling 0.98 TB, and contains 53,324,440 records. The WARC files are organized by crawling time, each containing records crawled from a mutually exclusive time span. We then reformat the WARC files to yield the following five datasets for comparison: 1) the original WARC files; 2) case 1 plus CDX index files built against all the original WARC files; 3) Parquet files containing the same information as case 1, with most columns in String type; 4) the same as case 3 but the Timestamp column in INT64 Timestamp type; 5) Avro, [...]",,,,
"Petros Terzis – University College London, United Kingdom",Building Programmable Commons,https://osf.io/preprints/socarxiv/yuef5/,papers,20230101Z00:00:00,,,"University College London, United Kingdom","digital-commons, public-commons, cc-cited-not-used","Programmable commons and the public value of programmability are thus introduced as parts of a broader political project that aspires to democratise access to, and management of these resources. By drawing on the history of a family of commons -namely intellectual commons, infrastructure commons, and global commons-, this paper explores the material form and impact of infocomputational technologies and presents a blend of bottom-up and top-down initiatives for their commons-based organisation and governance.",,,,
"Hans W. A. Hanley, Deepak Kumar, Zakir Durumeric &ndash; Stanford University, USA","A Golden Age: Conspiracy Theories' Relationship with Misinformation Outlets, News Media, and the Wider Internet",https://arxiv.org/abs/2301.10880,papers,20230101Z00:00:00,,"Do we live in a {""}Golden Age of Conspiracy Theories?{""} In the last few decades, conspiracy theories have proliferated on the Internet with some having dangerous real-world consequences. A large contingent of those who participated in the January 6th attack on the US Capitol believed fervently in the QAnon conspiracy theory. In this work, we study the relationships amongst five prominent conspiracy theories (QAnon, COVID, UFO/Aliens, 9-11, and Flat-Earth) and each of their respective relationships to the news media, both mainstream and fringe. Identifying and publishing a set of 755 different conspiracy theory websites dedicated to our five conspiracy theories, we find that each set often hyperlinks to the same external domains, with COVID and QAnon conspiracy theory websites largest amount of shared connections. Examining the role of news media, we further find that not only do outlets known for spreading misinformation hyperlink to our set of conspiracy theory websites more often than mainstream websites but this hyperlinking has increased dramatically between 2018 and 2021, with the advent of QAnon and the start of COVID-19 pandemic. Using partial Granger-causality, we uncover several positive correlative relationships between the hyperlinks from misinformation websites and the popularity of conspiracy theory websites, suggesting the prominent role that misinformation news outlets play in popularizing many conspiracy theories.","Stanford University, USA","nlp/fake-news-detection, misinformation, disinformation, conspiracy theories, web-science/hyperlinkgraph","Using our own web scrapes and pages historically scraped by Common Crawl,¹ [¹https://commoncrawl.org/] we then document the state and the changing behaviors of the conspiracy theory ecosystem and their relationship to a separate set of 530 known misinformation outlets, 565 authentic news websites, and 528 non-news websites. [...] Utilizing the Common Crawl harmonic and PageRank centrality measures that measure a website’s centrality across all of the crawled Internet, we then find many of the websites in our dataset have relatively high network centrality, suggesting that many of them are not peripheral on the Internet but actually near the Internet’s core/are mainstream. Indeed examining, the hyperlink connections between news media and these conspiracy theories, we find that many of them rely heavily on mainstream as well as misinformation outlets (compared to non-news websites) for their information, with many popular misinformation outlets also hyperlinking back to many of these conspiracy theory websites. [...] 4.1 Common Crawl Page Retrieval and Website Crawling To gather the set of hyperlinks between our websites, we utilize Common Crawl data [92]—widely considered the most complete publicly available source of web crawl data—and our own website crawls. For each website in our dataset, we collect all the domain’s HTML pages that were indexed by Common Crawl before August 2021. In addition to Common Crawl data, we further utilize our own website scrapes. We utilize our own crawls, in addition to Common Crawl, due to noisiness, missing pages, and missing domains within the Common Crawl dataset [85]. 
For example, 309 particularly small conspiracy theory domains were not contained within the Common Crawl dataset (i.e. these websites often only contained a few dozen pages). Thus for each website in our dataset, we further gather all the HTML pages 10 hops from each website’s homepage (i.e., we collect all URLs linked from the homepage (1st hop), then all URLs linked from the pages that were linked by the homepage (2nd hop), and so forth). For each HTML page from our scrapes and Common Crawl, we parse the HTML, detect the date that page was published, and collect hyperlinks to other pages (i.e., HTML <a> tags). Altogether we gather the available Common Crawl pages and scrape the HTML for our 755 conspiracy theory, 530 misinformation, 565 authentic news, and 528 non-news websites. [...] Utilizing Common Crawl network data [ 61] over the indexed Internet (87.7 million websites), we thus determine the network centrality of our set of conspiracy-focused websites to understand if each conspiracy theory website category is “core” (regularly utilized on the Internet) or “peripheral”. We utilize centralities across Common Crawl’s dataset rather than our partial one in order to get a sense of each conspiracy theory’s centrality on the entire Internet. While only 446 of our conspiracy theory websites are within the Common Crawl dataset, this analysis allows us to fully understand the relative roles that each conspiracy theory website group in our dataset plays on the wider Internet.",,,,
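The per-domain page retrieval quoted above (collecting every page of a site that Common Crawl indexed before a cutoff) can be approximated against the public CC-Index API. Below is a minimal Python sketch using the requests library; the crawl ID and domain are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch: list Common Crawl captures for one domain via the public
# CC-Index API, the kind of per-domain lookup described in the snippet above.
# CRAWL_ID and DOMAIN are illustrative placeholders.
import json
import requests

CRAWL_ID = "CC-MAIN-2021-25"   # one crawl indexed before August 2021
DOMAIN = "example.com"         # placeholder domain

resp = requests.get(
    f"https://index.commoncrawl.org/{CRAWL_ID}-index",
    params={"url": f"{DOMAIN}/*", "output": "json"},
    timeout=60,
)
resp.raise_for_status()

# The response is one JSON record per line, each pointing at a WARC file,
# byte offset, and length from which the capture can be fetched.
for line in resp.text.splitlines():
    record = json.loads(line)
    print(record["timestamp"], record["url"], record["filename"])
```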
"Ralph Peeters, Reng Chiz Der, Christian Bizer &ndash; University of Mannheim, Germany",WDC Products: A Multi-Dimensional Entity Matching Benchmark,https://arxiv.org/abs/2301.09521,papers,20230101Z00:00:00,,,"University of Mannheim, Germany","semantic-web, semantic-web/microformats, e-commerce, linked data, schema.org annotations","The first step of the pipeline is the extraction of large amounts of product offers from the Common Crawl⁴ [⁴https://commoncrawl.org/] using schema.org annotations. Some product offers contain product identifiers like MPNs and GTINs which allow us to group offers into [...] The Web Data Commons6 project regularly extracts schema.org annotations from the Common Crawl, the largest web corpus available to the public, in order to monitor the adoption of semantic annotations on the Web and to provide the extracted data for public download. The WDC Products benchmark uses product offers from the WDC Product Data Corpus V2020 (PDC2020)7. The corpus was created by extracting schema.org product data from the September 2020 version of the Common Crawl. The extracted data goes through a pipeline of cleansing steps such as removing offers from listing pages as well as advertisements that are contained in a page in addition to the main offer [31]. The resulting PDC2020 corpus consists of ∼98 million product offers originating from 603,000 websites.",CC-MAIN-2020-40,,,
Xavier Amatriain – amatriain.net,Transformer models: an introduction and catalog,https://arxiv.org/abs/2302.07730,papers,20230101Z00:00:00,,,amatriain.net,"nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model",,,,,
"Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr &ndash; Google; ETH Zurich, Switzerland; NVIDIA; Robust Intelligence",Poisoning Web-Scale Training Datasets is Practical,https://arxiv.org/abs/2302.10149,papers,20230101Z00:00:00,"Cryptography and Security (cs.CR), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences","Deep learning models are often trained on distributed, webscale datasets crawled from the internet. In this paper, we introduce two new dataset poisoning attacks that intentionally introduce malicious examples to a model's performance. Our attacks are immediately practical and could, today, poison 10 popular datasets. Our first attack, split-view poisoning, exploits the mutable nature of internet content to ensure a dataset annotator's initial view of the dataset differs from the view downloaded by subsequent clients. By exploiting specific invalid trust assumptions, we show how we could have poisoned 0.01% of the LAION-400M or COYO-700M datasets for just $60 USD. Our second attack, frontrunning poisoning, targets web-scale datasets that periodically snapshot crowd-sourced content -- such as Wikipedia -- where an attacker only needs a time-limited window to inject malicious examples. In light of both attacks, we notify the maintainers of each affected dataset and recommended several low-overhead defenses.","Google; ETH Zurich, Switzerland; NVIDIA; Robust Intelligence","nlp/corpus-construction, computer-security, nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","B.3 Common Crawl Common Crawl is a petabyte-scale corpus of web crawl data that is repeatedly captured on a roughly monthly basis. Each archive is a complete re-crawl of the internet that records the full activity, including all requests of the crawler and the host responses—with both HTTP headers and content. As such, each archive contains a static snapshot of all crawled pages at the time of visit. This may include new page content not seen during a previous crawl, and may exclude content that has become stale since the previous crawl. For example, data crawled during September 24 through October 8, 2022 contains 3.15 billion web pages with 380 TiB of uncompressed content from 34 million registered domains—1.3 billion URLs were not visited in any of the prior crawls.¹⁴ The Common Crawl dataset is vulnerable to an attack which is similar to both our frontrunning and split-view poisoning attacks. The adversary can purchase an expired domain which was previously contained in the Common Crawl, and it will be re-crawled with the adversary’s choice of content, which will then appear in subsequent Common Crawl snap- shots. Notice that, differently from the snapshot-poisoning attack on Wikipedia, there is no content moderation here and so the adversary simply needs to continue to control the domain to poison all future Common Crawl snapshots. Buying recently-expired domains that existed in previous Common Crawl snapshots allows a stronger form of attack where the attack can inject entirely new links into the crawl. This can be accomplished by adding links or subdomains to poisoned domains, and allowing the crawler to discover the new poisoned domains. Thus, an adversary may inject arbitrarily many pages into the Common Crawl dataset, not only from the originally expired subset. 
We do not implement this attack following our ethics statements outlined earlier. Since Common Crawl WARC files have been hosted by Amazon on a AWS Athena (serverless service)¹⁵, domain reconnaissance work to analyze URLs is inexpensive. Scanning through 10 years of Common Crawl data to analyze domains from popular TLDs and high number of Common Crawl entries cost us USD$ 0.84. While additional analysis might somewhat increase this cost, it remains an inexpensive way to search for vulnerable domains. Buying recently expired domains, or domains that have a dangling DNS record with an active IP address is preferred, as domains that failed to return a 200-OK status in consecutive crawls seem to be moved to a lower priority. For example, among expired domains we purchased, just one domain accounts for more than 90% of all status codes among the purchased domains, while other domains we purchased as early as 12/20/2020 have seen relatively less scraping traffic across a 3 year period.¹⁶ Because Common Crawl is enormous and uncurated (to accurately reflect the content of the internet) poisoning all of Common Crawl is impractical due to size. Additionally, it is not always apparent how consumers of this data are processing it for downstream machine learning tasks. However, there exist many derivative datasets which are constructed by curating a relevant subset of the Common Crawl. This includes the LAION-5B image dataset [57], the text dataset known as the Pile [23], the multilingual text dataset CC-100 [78], and the CCMatrix dataset [61], a translation dataset of pairs of translated sentences. Such curation actually amplifies the power of an attack: an attack which adds 1MB of text to the Common Crawl would be poisoning a 2.5 · 10⁻⁹ fraction of the Common Crawl, but if this text bypasses the curation done for the CC-100 dataset, it could instead poison a 1.2 · 10⁻⁵ fraction of the English corpus, or even a full 9.1% of the Oromo corpus.",,,,
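As a quick sanity check on the amplification figures quoted in this snippet, the short calculation below reproduces them from the 380 TiB crawl size mentioned above and an assumed size of roughly 80 GB for the CC-100 English split (the latter is an approximation, not a number from the paper).

```python
# Back-of-the-envelope check of the curation-amplification argument above.
# One crawl is taken as ~380 TiB (quoted in the snippet); the CC-100 English
# split is assumed to be ~80 GB, which is an approximation.
injected = 1 * 1024**2                 # 1 MB of attacker-controlled text

one_crawl = 380 * 1024**4              # ~380 TiB of uncompressed content
cc100_english = 80 * 1024**3           # ~80 GB (assumed)

print(f"fraction of one crawl:   {injected / one_crawl:.1e}")      # ~2.5e-09
print(f"fraction of CC-100 (en): {injected / cc100_english:.1e}")  # ~1.2e-05
```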
"Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei &ndash; Microsoft",Language Is Not All You Need: Aligning Perception with Language Models,https://arxiv.org/abs/2302.14045,papers,20230101Z00:00:00,,,Microsoft,"nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","Text Corpora We train our model with The Pile [GBB+20] and Common Crawl (CC). The Pile is a massive English text dataset built for training large-scale language models, which is produced from a variety of data sources. We exclude data splits from GitHub, arXiv, Stack Exchange, and PubMed Central. We also include the Common Crawl snapshots (2020-50 and 2021-04) datasets, CC-Stories, and RealNews datasets [SPP+19 , SPN+22]. The entire datasets have been purged of duplicate and near-duplicate documents, as well as filtered to exclude downstream task data. Refer to Appendix B.1.1 for detailed descriptions of training text corpora.¶ Image-Caption Pairs The image-caption pairs are constructed from several datasets, including English LAION-2B [ SBV+22 ], LAION-400M [ SVB+21], COYO-700M [BPK+22 ], and Conceptual Captions [ SDGS18, CSDS21]. English LAION-2B, LAION-400M, and COYO-700M are collected from web pages of the Common Crawl web data by extracting image sources and the corresponding alt-text. Conceptual Captions are also from internet web pages. More details can be found in Appendix B.1.2. ¶ Interleaved Image-Text Data We collect interleaved multimodal data from the Common Crawl snapshot, which is a publicly available archive of web pages. We use a filtering process to select about 71M web pages from the original 2B web pages in the snapshot. We then extract the text and images from the HTML of each selected web page. For each document, we limit the number of images to five to reduce noise and redundancy. We also randomly discard half of the documents that only have one image to increase the diversity. We provide more details about the data collection process in Appendix B.1.3. By using this corpus, we enable KOSMOS-1 to handle interleaved text and image and improve its few-shot ability.","CC-MAIN-2020-50, CC-MAIN-2021-04",,"The-Pile-English, CC-Stories, RealNews, LAION-400M, LAION-2B, COYO-700M",
"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample &ndash; Meta AI",LLaMA: Open and Efficient Foundation Language Models,https://arxiv.org/abs/2302.13971,papers,20230101Z00:00:00,"Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences","We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.",Meta AI,"nlp/language-model, nlp/transformer-language-model, nlp/multi-modal-language-model","English CommonCrawl [67%]. We preprocess five CommonCrawl dumps, ranging from 2017 to 2020, with the CCNet pipeline (Wenzek et al., 2020). This process deduplicates the data at the line level, performs language identification with a fastText linear classifier to remove non-English pages and filters low quality content with an n-gram language model. In addition, we trained a linear model to classify pages used as references in Wikipedia v.s. randomly sampled pages, and discarded pages not classified as references.","five CommonCrawl dumps, ranging from 2017 to 2020",Tensorflow-C4,,
"Khaled Ammar &ndash; University of Waterloo, Ontario, Canada",Systems and Algorithms for Dynamic Graph Processing,https://uwspace.uwaterloo.ca/bitstream/handle/10012/19195/Ammar_Khaled.pdf,papers,20230101Z00:00:00,,,"University of Waterloo, Ontario, Canada","graph-processing, web-science/hyperlinkgraph","Common Crawl experiments. Sixteen machines load 64 billion edges, index them, and track motifs in 20 batches of 10K random edge changes.",,,"WDC-hyperlinkgraph, WDC-hyperlinkgraph (2014)",
"Saffron Huang, Divya Siddarth &ndash; Collective Intelligence Project (cip.org)",Generative AI and the Digital Commons,https://arxiv.org/pdf/2303.11074.pdf,papers,20230101Z00:00:00,,,Collective Intelligence Project (cip.org),"digital-commons, public-commons, nlp/corpus-construction, nlp/language-models, nlp/generative-language-models, cc-cited-not-used","GFMs are trained on the digital commons. Generative foundation models leverage large databases of scraped information (text, code, images) from the internet to train highly capable models. This depends on the availability of public, scrapable data and leverages the “collective intelligence” of humanity, including the painstakingly edited Wikipedia, millennia’s worth of books, billions of Reddit comments, hundreds of terabytes’ worth of images, and more³ [³LAION-5B, which Stable Diffusion is trained on, has 5 billion text-image pairs (Schuhmann et al., 2022).The Pile has 100+GB of books (Gao et al., 2020)]. They also rely on non- profits like Common Crawl (which build and maintain open repositories of web crawl data), Creative Commons (for open licenses for the data used), open source libraries, and other digital infrastructure. They also take advantage of aggregated user preferences; e.g. the WebText dataset underlying the GPT family of models uses Reddit “karma scores” to select content for inclusion. All of this is common digital information and infrastructure that many people contribute to.",,,,
"Alan Chan, Herbie Bradley, Nitarshan Rajkumar &ndash; University of Cambridge, United Kingdom; Mila, Université de Montréal, Canada; EleutherAI",Reclaiming the Digital Commons: A Public Data Trust for Training Data,https://arxiv.org/abs/2303.09001,papers,20230101Z00:00:00,,"Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities from the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens that have generated the data necessary to train models do not have input on how their data are to be used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons, to license to commercial model developers for a percentage cut of revenues from deployment. First, we argue in detail for the existence of such a trust. We also discuss feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust. We propose a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.","University of Cambridge, United Kingdom; Mila, Université de Montréal, Canada; EleutherAI","digital-commons, public-commons, nlp/corpus-construction, nlp/language-models, nlp/generative-language-models, cc-cited-not-used","The data trust could also start from existing efforts, such as the Common Crawl.",,,,
"Michał Turski, Tomasz Stanisławek, Karol Kaczmarek, Paweł Dyda, Filip Graliński &ndash; Snowflake; Adam Mickiewicz University, Poznań, Poland",CCpdf: Building a High Quality Corpus for Visually Rich Documents from Web Crawl Data,https://arxiv.org/pdf/2304.14953.pdf,papers,20230101Z00:00:00,,"In recent years, the field of document understanding has progressed a lot. A significant part of this progress has been possible thanks to the use of language models pretrained on large amounts of documents. However, pretraining corpora used in the domain of document understanding are single domain, monolingual, or nonpublic. Our goal in this paper is to propose an efficient pipeline for creating a big-scale, diverse, multilingual corpus of PDF files from all over the Internet using Common Crawl, as PDF files are the most canonical types of documents as considered in document understanding. We analysed extensively all of the steps of the pipeline and proposed a solution which is a trade-off between data quality and processing time. We also share a CCpdf corpus in a form or an index of PDF files along with a script for downloading them, which produces a collection useful for language model pretraining. The dataset and tools published with this paper offer researchers the opportunity to develop even better multilingual language models.","Snowflake; Adam Mickiewicz University, Poznań, Poland","nlp/language-models, nlp/corpus-construction, document understanding, PDF","As our input we used web indexes created by Common Crawl. [...] They crawl webpages and save them into crawls dumps. A crawl dump contains billions of webpages (hundreds of terabytes of uncompressed data) and a new dump has been published nearly every month since March 2014. Some earlier, more irregular dumps starting from 2008 are also available.¹¹ Each dump also contains an index of the crawled pages. We decided to simply use the latest (and the largest) dump available at the time of writing this paper — the May 2022 dump.¹² [¹²https://commoncrawl.org/2022/06/may-2022-crawl-archive-now-available/] It contains 3.45 billion web pages, which amounts to 462 TB of uncompressed content. It would obviously be possible to apply the extraction procedure described in this paper to all crawls to obtain an even larger collection of PDFs, which would also allow for a diachronic analysis, but we wanted to focus on the most recent documents. Note that dumps contain only files considered as text files by the Common Crawl web robot. Mostly these are web pages in the HTML format, but, fortunately, PDFs are also treated as text files, being derivative of the PostScript page description language. This is not the case with, for instance, images, Excel files, DOCX files. Consequently, such files cannot be amassed using the methods described in the aforementioned papers.¶ 3.2 PDF links extraction¶ We experimented with two methods for extracting links to PDF files (step 1 in Figure 1):¶ 1. using CDX files, i.e., index server files provided by Common Crawl;¶ 2. looking for links to PDF files in WARC, i.e., raw crawl data files.¶ The first method is simpler, as CDX files are easy to download and take up only 225 GB in total. The second method might yield more links to PDF files, but:¶ – it is impossible for us to download all WARCs. 
Only a limited number of them can be processed, though still a significant number of PDF links can be added even if a small percentage of all WARC files are processed,¶ – there is lower probability that the file linked is available at all, be it in the crawl dump or simply at the original address.¶ In CDX files, the MIME type of a captured file is specified, and we limited ourselves to the application/pdf type.¶ Hence, in this paper, we focus on the first method, which allows to speed up the whole processing pipeline.",CC-MAIN-2022-21 (CDX),,,
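A rough illustration of the CDX-based route chosen in the paper: the Python sketch below scans one downloaded index shard for captures whose MIME type is application/pdf and collects their URLs. The file name is a placeholder for a cdx-*.gz shard from the May 2022 crawl's index.

```python
# Minimal sketch: extract PDF links from one Common Crawl CDX index shard by
# filtering on the recorded MIME type, as described in the snippet above.
# CDX_FILE is a placeholder for a downloaded cdx-*.gz shard.
import gzip
import json

CDX_FILE = "cdx-00000.gz"

pdf_urls = []
with gzip.open(CDX_FILE, "rt", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        # Each line: "<SURT key> <timestamp> <JSON metadata>"
        parts = line.split(" ", 2)
        if len(parts) < 3:
            continue
        try:
            meta = json.loads(parts[2])
        except json.JSONDecodeError:
            continue
        if meta.get("mime") == "application/pdf":
            pdf_urls.append(meta["url"])

print(f"found {len(pdf_urls)} PDF links in {CDX_FILE}")
```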
"Sadia Nourin, Van Tran, Xi Jiang, Kevin Bock, Nick Feamster, Nguyen Phong Hoang, Dave Levin &ndash; University of Maryland, USA; University of Chicago, USA",Measuring and Evading Turkmenistan’s Internet Censorship: A Case Study in Large-Scale Measurements of a Low-Penetration Country,https://doi.org/10.1145/3543507.3583189,papers,20230101Z00:00:00,"Censorship Measurement, Web Filtering, Turkmenistan","Since 2006, Turkmenistan has been listed as one of the few Internet enemies by Reporters without Borders due to its extensively censored Internet and strictly regulated information control policies. Existing reports of filtering in Turkmenistan rely on a handful of vantage points or test a small number of websites. Yet, the country’s poor Internet adoption rates and small population can make more comprehensive measurement challenging. With a population of only six million people and an Internet penetration rate of only 38%, it is challenging to either recruit in-country volunteers or obtain vantage points to conduct remote network measurements at scale. We present the largest measurement study to date of Turkmenistan’s Web censorship. To do so, we developed TMC, which tests the blocking status of millions of domains across the three foundational protocols of the Web (DNS, HTTP, and HTTPS). Importantly, TMC does not require access to vantage points in the country. We apply TMC to 15.5M domains, our results reveal that Turkmenistan censors more than 122K domains, using different blocklists for each protocol. We also reverse-engineer these censored domains, identifying 6K over-blocking rules causing incidental filtering of more than 5.4M domains. Finally, we use , an open-source censorship evasion tool, to discover five new censorship evasion strategies that can defeat Turkmenistan’s censorship at both transport and application layers. We will publicly release both the data collected by TMC and the code for censorship evasion.","University of Maryland, USA; University of Chicago, USA","web-filtering, internet-censorship","[...] the payload of our probes contains domains curated from the Citizen Lab lists [5], the full Tranco list [42], and Common Crawl Project [8]. Due to limited resources of our VPS, we opt to probe the frst 10M FQDNs ranked by the Common Crawl Project instead of the full list of almost 400M FQDNs. [...] We scanned all regular expressions that TMC discovered against all FQDNs that we could obtain from DNS zone fles provided via ICANN’s Centralized Zone Data Service [ 6] and the full host list from the Common Crawl Project [8], totaling 718M FQDNs.",hyperlinkgraph,,,
"Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, Yejin Choi &ndash; Allen Institute for Artificial Intelligence, USA; University of California, Santa Barbara, USA; Paul G. Allen School of Computer Science, University of Washington, USA; Columbia University, USA; Yonsei University, South Korea; LAION","Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved With Text",https://arxiv.org/abs/2304.06939,papers,20230101Z00:00:00,,,"Allen Institute for Artificial Intelligence, USA; University of California, Santa Barbara, USA; Paul G. Allen School of Computer Science, University of Washington, USA; Columbia University, USA; Yonsei University, South Korea; LAION","nlp/corpus-construction, nlp/multimodal-corpora, ai/image-text-alignment","Multimodal C4 is an expansion of the text-only c4 dataset [21], which was created by taking the April 2019 snapshot from Common Crawl4 and applying several filters with the intention of retaining high-quality, natural English text. Each document in c4 consists of the text scraped from one URL. [...] e built the mmc4 dataset on top of c4 because: 1) c4 is a web-scale dataset widely adopted as a pre-training corpus [21 , 25, 9 , 29, 27 ]; 2) c4 is constructed from web pages, which frequently contain multimedia content like images: a multimodal sequence version is a natural extension; and 3) c4-en,5 the specific underlying subset from which we construct mmc4 has already been processed with several data-cleaning steps (including English- language identification by langdetect6 with at least 0.99 confidence; text deduplication removing duplicate three-sentence spans + placeholder text like “lorem ipsum{""}; and removal of any document containing any word on the “List of Dirty, Naughty, Obscene or Otherwise Bad Words”).7 See [ 21] for more information about the text-only c4. Importantly, by building on the popular text-only c4, prior text-only documentation efforts [ 11] can provide insight about potential biases and risks that could arise when training on our multimodal extension. We use the NLTK [4] sentence tokenizer to chunk each c4 document into a list of sentences.¶ Gathering images. We first retrieve the original webpages for each document in the c4-en dataset from the Common Crawl version 2019-18, which is the default version for c4. Next, we extract the URLs for downloadable images from the raw WAT files. We restrict the image extension to either png/jpeg/jpg, and exclude image URLs that contain the following tokens: tlogo, button, icon, plugin, widgetu. We attempt to download from these URLs, and resize images to a maximum dimension of 800px. We eliminate any c4 documents that do not contain valid, downloadable images at the time of collection (mid-to-late 2022). The starting point after this step is 115M documents and 1.37B images.",CC-MAIN-2019-18 (WET),Allenai-multimodal-c4 (mmc4),,
"Marius Løvold Jørgensen &ndash; UiT, The Arctic University of Norway, Norway",BacklinkDB: A Purpose-Built Backlink Database Management System,https://munin.uit.no/handle/10037/28861,papers,20230101Z00:00:00,,"In order to compile a list of all the backlinks for a given webpage, we need knowledge about all the outgoing links on the web. Traversing the web and storing all the backlink data in a database allows us to efficiently retrieve the list of backlinks for a web page on demand. However, the web consists of billions of backlinks which translates to terabytes of data. As the web is continuously evolving, the database needs to be rebuilt periodically in order for it to closely resemble the current state of the web. This thesis presents BacklinkDB, a purpose-built database management system designed for managing a backlink database. Using a series of in-memory hash indices allows for high insert throughput when building the database. The backlink data for a given domain is stored together in sections throughout the database file. This allows for the requested backlink data to be easily located. With a simple sql-inspired query language, the users can both insert and retrieve backlink data. The evaluation shows that building a purpose-built database management sys- tem allows us to make the trade-offs between which performance metrics that is important. In this thesis, we will focus on creating a scalable backlink database management system with high insert performance","UiT, The Arctic University of Norway, Norway","web-science/hyperlinkgraph, ir/backlinkdb","5.1.3 Data¶ The link data used in the experiments is downloaded from the Common Crawls website1. Common Crawl is a non-profit organization that periodically crawls the web and publicizes data. For the experiments described in this chapter, data from the August 2022 crawl² [²https://commoncrawl.org/2022/08/august-2022-crawl-archive-now-available/] is used.¶ Data prepossessing¶ Common Crawl provides data on all the indexable webpages. This data is provided in a series of warc files found in their public repository. Common Crawl also provide WAT files which are produced by processing the warc files and extracting the metadata for each webpage. The WAT files contain a list of all the outgoing links for each of the webpages.¶ All external links from the WAT file are extracted to their own link file so that they can be directly inserted into a database. Each link is stored on a separate line in the file using spaces to separate the source domain, source path, destination domain, and destination path. All the backlinks containing urls longer than 2048 characters are discarded. A link file is created for each of the WAT files. These link files contain all the information needed to build a backlink database.",CC-MAIN-2022-33 (WAT),,,
"Stefano Calzavara, Florian Hantke, Moritz Wilhelm, Alvise Rabitti, Ben Stock &ndash; CISPA Helmholtz Center for Information Security, Germany; Università Ca’ Foscari, Venezia, Italy",You Call This Archaeology? Evaluating Web Archives for Reproducible Web Security Measurements,https://swag.cispa.saarland/papers/calzavara2023archaeology.pdf,papers,20230101Z00:00:00,,"Given the dynamic nature of the Web, security measurements on it suffer from reproducibility issues. In this paper we take a systematic look into the potential of using web archives for web security measurements. We first evaluate an extensive set of web archives as potential sources of archival data, showing the superiority of the Internet Archive with respect to its competitors. We then assess the appropriateness of the Internet Archive for historical web security measurements, detecting subtleties and possible pitfalls in its adoption. Finally, we investigate the feasibility of using the Internet Archive to simulate live security measurements, using recent archival data in place of live data. Our analysis shows that archive-based security measurements are a promising alternative to traditional live security measurements, which is reproducible by design; nevertheless, it also shows potential pitfalls and shortcomings of archive-based measurements. As an important contribution, we use the collected knowledge to identify insights and best practices for future archive-based security measurements.","CISPA Helmholtz Center for Information Security, Germany; Università Ca’ Foscari, Venezia, Italy","computer-security/internet-security, web-science","Besides Memento-based archives, we also consider Common Crawl as a possible alternative source of archival data. Common Crawl archives parts of the Web once a month and stores the content as one snapshot. The reason why we use Common Crawl is that it contains a massive amount of data: its October 2022 snapshot includes more than 2.55 billion pages, with its index alone being larger than 2TB; moreover, Common Crawl was already used in a previous web security measurement [ 15, 36]. The content archived on Common Crawl is stored in form of large compressed files consisting of lists of WARC files. These WARC files hold meta information such as the requested datetime, content type, or content size, followed by the archived content.",,,,
"Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, Percy Liang &ndash; Stanford University, USA",Foundation Models and Fair Use,https://arxiv.org/abs/2303.15715,papers,20230101Z00:00:00,,"Existing foundation models are trained on copyrighted material. Deploying these models can pose both legal and ethical risks when data creators fail to receive appropriate attribution or compensation. In the United States and several other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine. However, there is a caveat: If the model produces output that is similar to copyrighted data, particularly in scenarios that affect the market of that data, fair use may no longer apply to the output of the model. In this work, we emphasize that fair use is not guaranteed, and additional work may be necessary to keep model development and deployment squarely in the realm of fair use. First, we survey the potential risks of developing and deploying foundation models based on copyrighted content. We review relevant U.S. case law, drawing parallels to existing and potential applications for generating text, source code, and visual art. Experiments confirm that popular foundation models can generate content considerably similar to copyrighted material. Second, we discuss technical mitigations that can help foundation models stay in line with fair use. We argue that more research is needed to align mitigation strategies with the current state of the law. Lastly, we suggest that the law and technical mitigations should co-evolve. For example, coupled with other policy mechanisms, the law could more explicitly consider safe harbors when strong technical tools are used to mitigate infringement harms. This co-evolution may help strike a balance between intellectual property and innovation, which speaks to the original goal of fair use. But we emphasize that the strategies we describe here are not a panacea and more work is needed to develop policies that address the potential harms of foundation models.","Stanford University, USA","legal/copyright, legal/fair-use, nlp/language-model, ai/foundation-model, web-crawling, robots.txt","Implied Licenses and Common Crawl. On the other hand, many creators voluntarily post their works on the internet with permissions for web crawling. It is well-established that merely posting something on the internet does not waive the intellectual property interest in the work, but many data creators use an industry-standard “robots.txt{""} file to affirmatively to include their website and data in caches and search indexes. In Field v. Google, Inc. (D. Nev. 2006) a district court held that Google could cache web content that did not disallow scraping via robots.txt, suggesting that there was an implied license and thus the use was not infringement. This license only extended to caching in that case, which does not necessarily reflect the uses of foundation models we discuss throughout this work, so it is unlikely to cover all the use cases we describe here. And the bounds of the uses covered by the robots.txt file are untested in court.21 While the issue of whether the implied license extends to foundation model training has not been resolved in litigation, it is possible that an outcome like Field v. Google, Inc. (D. Nev. 
2006) would extend to some foundation model uses—in particular, for building a cached dataset and training a model.¶ It is worth noting that the use of a robots.txt header or other opt-out mechanism has implications for fair use also. Datasets and models like C4 (Raffel et al., 2019) and LAION-400M (Schuhmann, 2021), rely on CommonCrawl data which is crawled only if users explicitly allow it through their robots.txt file. CommonCrawl is able to host a snapshot of the internet largely because of fair use arguments. As the organization’s director argues, there is a transformation into a different—not easily human-readable—format, the organization does not take a snapshot of entire webpages, and the use itself is transformative (from actively presenting content to caching content) and for the public benefit (Leetaru, 2017). In Field v. Google, Inc. (D. Nev. 2006), respect for the robots.txt file also was considered in the fair use assessment with the court noting that Google in good faith followed industry standards that would prevent caching (respecting disallowing crawling via a robots.txt). It is possible, then, that providing an opt-out mechanism for data creators and respecting the robots.txt opt-out mechanism will be taken into account in assessing a fair use argument, as it was in Field v. Google, Inc. (D. Nev. 2006).²²¶ [...] Furthermore, if web-crawled data is used, restricting it to data that respects robots.txt opt-outs can make a fair use argument more tractable, though not guaranteed. As we noted before, in Field v. Google, Inc. (D. Nev. 2006), respect for the robots.txt file was considered in the fair use assessment with the court because it gave the plaintiff opportunity to opt out. This is likely why many webcrawl-based models rely on the CommonCrawl dataset as a source. Its webcrawl automatically respects robots.txt opt-outs and does not crawl every webpage in full. It is possible then that future fair use assessments could consider respecting the robots.txt opt-out—or implementing other opt-out mechanisms—favorably, as was the case in Field v. Google, Inc. (D. Nev. 2006). Conversely, ignoring a robots.txt opt-out could negatively impact a fair use assessment. However, Kapoor & Narayanan (2023) have argued that there are structural critiques of opt-out mechanisms beyond the current state of the law.¶",,,,
"Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, Gideon Mann &ndash; Bloomberg, New York, NY, USA; Bloomberg, Toronto, ON, Canada; Computer Science, Johns Hopkins University, Baltimore, MD, USA",BloombergGPT: A Large Language Model for Finance,https://arxiv.org/abs/2303.17564,papers,20230101Z00:00:00,,"The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg's extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology. We release Training Chronicles (Appendix C) detailing our experience in training BloombergGPT.","Bloomberg, New York, NY, USA; Bloomberg, Toronto, ON, Canada; Computer Science, Johns Hopkins University, Baltimore, MD, USA","nlp/language-models, nlp/large-language-models, nlp/dataset-creation, financial markets, cc-cited-not-used",,,,,
"Joey Öhman, Severine Verlinden, Ariel Ekgren, Amaru Cuba Gyllensten, Tim Isbister, Evangelia Gogoulou, Fredrik Carlsson, Magnus Sahlgren &ndash; AI Sweden, Sweden; RISE, Sweden",The Nordic Pile: A 1.2TB Nordic Dataset for Language Modeling,https://arxiv.org/abs/2303.17183,papers,20230101Z00:00:00,,"Pre-training Large Language Models (LLMs) require massive amounts of text data, and the performance of the LLMs typically correlates with the scale and quality of the datasets. This means that it may be challenging to build LLMs for smaller languages such as Nordic ones, where the availability of text corpora is limited. In order to facilitate the development of the LLMS in the Nordic languages, we curate a high-quality dataset consisting of 1.2TB of text, in all of the major North Germanic languages (Danish, Icelandic, Norwegian, and Swedish), as well as some high-quality English data. This paper details our considerations and processes for collecting, cleaning, and filtering the dataset.","AI Sweden, Sweden; RISE, Sweden","nlp/corpus-construction, nlp/text-corpora, nlp/language-model","Therefore, The Nordic Pile is composed mostly of existing sources, with a large por- tion of these originating from derivatives of Common Crawl data, such as OSCAR (Suárez et al., 2019; Ortiz Suárez et al., 2020) and Multilingual C4 (mC4) (Xue et al., 2021), which is a language- filtered version of C4 (Raffel et al., 2020).¶ [...] Web CC: Web data derived from Common Crawl¶ Similarly, Web CC is the most prominent of our categories.",,,,
Dong Zhang &ndash; ,Should ChatGPT and Bard Share Revenue with Their Data Providers? A New Business Model for the AI Era,https://arxiv.org/abs/2305.02555,papers,20230101Z00:00:00,,"With various AI tools such as ChatGPT becoming increasingly popular, we are entering a true AI era. We can foresee that exceptional AI tools will soon reap considerable profits. A crucial question arise: should AI tools share revenue with their training data providers in additional to traditional stakeholders and shareholders? The answer is Yes. Large AI tools, such as large language models, always require more and better quality data to continuously improve, but current copyright laws limit their access to various types of data. Sharing revenue between AI tools and their data providers could transform the current hostile zero-sum game relationship between AI tools and a majority of copyrighted data owners into a collaborative and mutually beneficial one, which is necessary to facilitate the development of a virtuous cycle among AI tools, their users and data providers that drives forward AI technology and builds a healthy AI ecosystem. However, current revenue-sharing business models do not work for AI tools in the forthcoming AI era, since the most widely used metrics for website-based traffic and action, such as clicks, will be replaced by new metrics such as prompts and cost per prompt for generative AI tools. A completely new revenue-sharing business model, which must be almost independent of AI tools and be easily explained to data providers, needs to establish a prompt-based scoring system to measure data engagement of each data provider. This paper systematically discusses how to build such a scoring system for all data providers for AI tools based on classification and content similarity models, and outlines the requirements for AI tools or third parties to build it. Sharing revenue with data providers using such a scoring system would encourage more data owners to participate in the revenue-sharing program. This will be a utilitarian AI era where all parties benefit.",,"legal/copyright, legal/fair-use, nlp/language-model, ai/foundation-model, economic aspects of large language models, monetization of training data",,,,,
"Yangsibo Huang, Samyak Gupta, Zexuan Zhong, Kai Li, Danqi Chen &ndash; ",Privacy Implications of Retrieval-Based Language Models,https://arxiv.org/abs/2305.14888,papers,20230101Z00:00:00,,"Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts, by incorporating retrieved text from external datastores. While it is well known that parametric models are prone to leaking private data, it remains unclear how the addition of a retrieval datastore impacts model privacy. In this work, we present the first study of privacy risks in retrieval-based LMs, particularly kNN-LMs. Our goal is to explore the optimal design and training procedure in domains where privacy is of concern, aiming to strike a balance between utility and privacy. Crucially, we find that kNN-LMs are more susceptible to leaking private information from their private datastore than parametric models. We further explore mitigations of privacy risks. When privacy information is targeted and readily detected in the text, we find that a simple sanitization step would completely eliminate the risks, while decoupling query and key encoders achieves an even better utility-privacy trade-off. Otherwise, we consider strategies of mixing public and private data in both datastore and encoder training. While these methods offer modest improvements, they leave considerable room for future work. Together, our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs. Our code is available at: [https://github.com/Princeton-SysML/kNNLM_privacy].",,,,,,,
"Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, Daphne Ippolito &ndash; ","A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity",https://arxiv.org/abs/2305.13169,papers,20230101Z00:00:00,,"Pretraining is the preliminary and fundamental step in developing capable language models (LM). Despite this, pretraining data design is critically under-documented and often guided by empirically unsupported intuitions. To address this, we pretrain 28 1.5B parameter decoder-only models, training on data curated (1) at different times, (2) with varying toxicity and quality filters, and (3) with different domain compositions. First, we quantify the effect of pretraining data age. A temporal shift between evaluation data and pretraining data leads to performance degradation, which is not overcome by finetuning. Second, we explore the effect of quality and toxicity filters, showing a trade-off between performance on standard benchmarks and risk of toxic generations. Our findings indicate there does not exist a one-size-fits-all solution to filtering training data. We also find that the effects of different types of filtering are not predictable from text domain characteristics. Lastly, we empirically validate that the inclusion of heterogeneous data sources, like books and web, is broadly beneficial and warrants greater prioritization. These findings constitute the largest set of experiments to validate, quantify, and expose many undocumented intuitions about text pretraining, which we hope will help support more informed data-centric decisions in LM development.",,,,,,,
"Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli &ndash; Meta AI; Hebrew University of Jerusalem, Israel","Scaling Speech Technology to 1,000+ Languages",https://arxiv.org/abs/2305.13516,papers,20230101Z00:00:00,,"Expanding the language coverage of speech technology has the potential to improve access to information for many more people. However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000 languages spoken around the world. The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task. The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages, a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models for the same number of languages, as well as a language identification model for 4,017 languages. Experiments show that our multilingual speech recognition model more than halves the word error rate of Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.","Meta AI; Hebrew University of Jerusalem, Israel","nlp/speech-recognition, nlp/language-model","We evaluate this single model on FLEURS, CommonVoice, VoxPopuli and MLS. [...] During inference, we use n-gram models trained on CommonCrawl data. [...] ¶ Identifying Biased Words. We were not able to find speakers for most of the considered languages of this study and therefore use the following automatic procedure to determine religious words: for each word that occurs in the training data of MMS-lab, we compare the relative token frequency, that is, the rate at which the word type occurs in the MMS-lab data, to the relative token frequency in a general domain corpus; we use Common Crawl [Conneau et al., 2020b] as a general domain corpus. If the relative word frequency is at least twice as high in MMS-lab compared to Common Crawl, then we add it to the subset of words we include in our study. This enables us to evaluate on 51 languages of the FLEURS corpus since not all languages are covered by MMS-lab and we also need to find data in Common Crawl for each language. The automatic procedure has the added benefit of avoiding any potential biases introduced by human annotators. [...]¶ B n-gram Language Models¶ We train 5-gram language models on Common Crawl data using KenLM [Heafield, 2011] for each language in FLEURS. For languages that do not use spaces to separate words, we train 20-gram character-level language models. These languages are Mandarin Chinese (cmn), Cantonese Chinese (yue), Japanese (jpn), Thai (tha), Lao (lao), Burmese (mya) and Khmer (khm). The text is pre- processed following § 3.1.2 and we also remove emojis.³³",,,,
"Tetsuya Sakai, Sijie Tao, Nuo Chen, Yujing Li, Maria Maistro, Zhumin Chu, Nicola Ferro &ndash; Waseda University, Japan; University of Copenhagen, Denmark; Tsinghua University, P. R. C.; University of Padua, Italy","On the Ordering of Pooled Web Pages, Gold Assessments, and Bronze Assessments",https://doi.org/10.1145/3600227,papers,20230101Z00:00:00,"web search, relevance assessments, pooling, test collections, information retrieval","The present study leverages a recent opportunity we had to create a new English web search test collection for the NTCIR-16 We Want Web (WWW-4) task, which concluded in June 2022. More specifically, through the test collection construction effort, we examined two factors that may affect the relevance assessments of depth-k pools, which in turn may affect the relative evaluation of different IR systems. The first factor is the document ordering strategy for the assessors, namely, prioritisation (PRI) and randomisation (RND). PRI is a method that has been used in NTCIR tasks for over a decade; it ranks the pooled documents by a kind of pseudorelevance for the assessors. The second factor is assessor type, i.e., Gold or Bronze. Gold assessors are the topic creators and therefore they “know” which documents are (highly) relevant and which are not; Bronze assessors are not the topic creators and may lack sufficient knowledge about the topics. We believe that our study is unique in that the authors of this paper served as the Gold assessors when creating the WWW-4 test collection, which enabled us to closely examine why Bronze assessments differ from the Gold ones. Our research questions examine assessor efficiency (RQ1), inter-assessor agreement (RQ2), system ranking similarity with different qrels files (RQ3), system ranking robustness to the choice of test topics (RQ4), and the reasons why Bronze assessors tend to be more liberal than Gold assessors (RQ5). The most remarkable of our results are as follows. Firstly, in the comparisons for RQ1 through RQ4, it turned out that what may matter more than the document ordering strategy (PRI vs. RND) and the assessor type (Gold vs. Bronze) is how well-motivated and/or well-trained the Bronze assessors are. Secondly, regarding RQ5, of the documents originally judged nonrelevant by the Gold assessors contrary to the Bronze assessors in our experiments, almost one half were truly relevant according to the Gold assessors’ own reconsiderations. This result suggests that even Gold assessors are far from perfect; budget permitting, it may be beneficial to hire highly-motivated Bronze assessors in addition to Gold assessors so that they can complement each other.","Waseda University, Japan; University of Copenhagen, Denmark; Tsinghua University, P. R. C.; University of Padua, Italy","ir/test-collection, ir/web-search, ir/search-engine-evaluation, nlp/corpus-construction","The WWW-4 task introduced a new English web corpus called Chuweb21, which was constructed based on the April 2021 block of Common Crawl dataset.⁹ [⁹ https://commoncrawl.org/2021/04/april-2021-crawl-archive-now-available/] Details of the corpus construction process can be found in the WWW-4 overview paper [38]. Chuweb21 contains 82,451,337 HTMLs or 1.69 TiB of compressed content; it is publicly available.¹⁰",,Chuweb21,,
"Hanlin Li, Nicholas Vincent, Stevie Chancellor, Brent Hecht &ndash; University of California, Berkeley, USA; University of California, Davis, USA; University of Minnesota, Minneapolis, USA; Northwestern University, Evanston, USA","The Dimensions of Data Labor: A Road Map for Researchers, Activists, and Policymakers to Empower Data Producers",https://arxiv.org/pdf/2305.13238.pdf,papers,20230101Z00:00:00,,"Many recent technological advances (e.g. ChatGPT and search engines) are possible only because of massive amounts of user-generated data produced through user interactions with computing systems or scraped from the web (e.g. behavior logs, user-generated content, and artwork). However, data producers have little say in what data is captured, how it is used, or who it benefits. Organizations with the ability to access and process this data, e.g. OpenAI and Google, possess immense power in shaping the technology landscape. By synthesizing related literature that reconceptualizes the production of data for computing as ``data labor'', we outline opportunities for researchers, policymakers, and activists to empower data producers in their relationship with tech companies, e.g advocating for transparency about data reuse, creating feedback channels between data producers and companies, and potentially developing mechanisms to share data's revenue more broadly. In doing so, we characterize data labor with six important dimensions - legibility, end-use awareness, collaboration requirement, openness, replaceability, and livelihood overlap - based on the parallels between data labor and various other types of labor in the computing literature.","University of California, Berkeley, USA; University of California, Davis, USA; University of Minnesota, Minneapolis, USA; Northwestern University, Evanston, USA","legal/copyright, cc-citet-not-used, user-generated data, empowerment, data leverage","For example, publicly available texts and artwork enabled the creation of generative AI models like ChatGPT and Dall- E because model developers were able to scrape and process data from billions of web pages¹. [¹https://commoncrawl.org/2022/10/sep-oct-2022-crawl-archive-now-available/]",,,,
"Mohamed Raouf Kanfoud, Abdelkrim Bouramoul &ndash; University of Constantine 2 – Abdelhamid Mehri, El Khroub, Algeria",Tackling the multilingual and heterogeneous documents with the pre-trained language identifiers,https://doi.org/10.1080/1206212X.2023.2218236,papers,20230101Z00:00:00,,"The Web has become one of the most important data sources, and the content shared is most often multilingual, as users belong to different cultures and speak different languages. Multilingual content (document) is not suitable for many people who only need content in one language. Furthermore, dividing a multilingual document into monolingual documents helps researchers extract only the text of the desired language to use in different tasks such as training or model testing. Therefore, it is challenging to clean and divide the raw content manually. This paper presents an automatic approach to dividing a multilingual document and reassembling it into monolingual documents by examining three existing state-of-the-art tools for Language Identification (LI). We prepared different corpora with different heterogeneity characteristics for the evaluation and evaluated their code-switching pattern using three different code-switching metrics. The proposed approach reached 99% as the best accuracy result for the long segment (long text) and 90% for the mixed segment. In addition, a good correlation was found between the I-Index and accuracy with Pearson’s r = −0.998.","University of Constantine 2 – Abdelhamid Mehri, El Khroub, Algeria","nlp/language-identification, nlp/corpus-construction, multi-lingual documents","The authors collected data from a non-profit foundation, Common Crawl, which explores the Web and provides data freely to the public. The collected datasets are heterogeneous and multilingual.",,,,
"Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Baptiste Pannier, Ebtesam Almazrouei, Julien Launay &ndash; LightOn; Technology Innovation Institute, Abu Dhabi, United Arab Emirates; LPENS, École normale supérieure, Paris, France","The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only",https://falconllm.tii.ae/Falcon_LLM_RefinedWeb.pdf,papers,20230101Z00:00:00,,"Large language models are commonly trained on a mixture of filtered web data and curated “high-quality” corpora, such as social media conversations, books, or technical papers. This curation process is believed to be necessary to produce performant models with broad zero-shot generalization abilities. However, as larger models requiring pretraining on trillions of tokens are considered, it is unclear how scalable is curation and whether we will run out of unique high-quality data soon. At variance with previous beliefs, we show that properly filtered and deduplicated web data alone can lead to powerful models; even significantly outperforming models from the state-of-the-art trained on The Pile. Despite extensive filtering, the high-quality data we extract from the web is still plentiful, and we are able to obtain five trillion tokens from CommonCrawl. We publicly release an extract of 600 billion tokens from our REFINEDWEB dataset, and 1.3/7.5B parameters language models trained on it*.","LightOn; Technology Innovation Institute, Abu Dhabi, United Arab Emirates; LPENS, École normale supérieure, Paris, France","nlp/language-models, nlp/large-language-models, nlp/text-corpora","Pipelines for web data. Massive web datasets are typically built upon CommonCrawl, a publicly available scrape of the internet, which has now been running for 12 years and has collected petabytes of data. [...] We introduce MDR (MacroData Refinement), a pipeline for filtering and deduplicating web data from CommonCrawl at very large scale. [...] CommonCrawl is available in either WARC (raw HTML response), or WET files (preprocessed to only include plain text). Individual files correspond to a page at a given URL; these constitute single documents/samples. Working with WET files would spare us from running our own HTML extraction; however, in line with previous works (Gao et al., 2020; Rae et al., 2021), we found WET files to include undesirable navigation menus, ads, and other irrelevant texts. Accordingly, our pipeline starts from raw WARC files, read with the warcio library. [...] RefinedWeb is built using all CommonCrawl dumps until the 2023-06 one; it could be updated with additional dumps as they are released. The public release of RefinedWeb is a 600GT random extract of the 5,000GT of the full dataset. For all experiments, we randomly sampled from the public extract, or earlier development versions of it.",“using all CommonCrawl dumps until the 2023-06 one” (WARC files),,,
Tom Taulli – ,Data: The Fuel for Generative AI,https://doi.org/10.1007/978-1-4842-9367-6_2,papers,20230101Z00:00:00,,"A large language model (LLM) processes huge amounts of data for its generative AI systems. They are on the scale of petabytes. Consider that a petabyte is 1000 terabytes. This would hold about 500 billion pages of standard text. No doubt, the generative models for images and videos are much larger.",,"nlp/large-language-models, cc-cited-not-used",,,,,
"Gilles Adda, Ioana Vasilescu, François Yvon &ndash; Université Paris-Saclay, CNRS, LISN, Paris, France",Language Report French,https://doi.org/10.1007/978-3-031-28819-7_16,papers,20230101Z00:00:00,,"This chapter presents a survey of the current state of technologies for the automatic processing of the French language. It is based on a thorough analysis of existing tools and resources for French, and also provides an accurate presentation of the domain and its main stakeholders (Adda et al. 2022). The chapter documents the presence of French on the internet and describes in broad terms the existing technologies for the French language. It also spells out general conclusions and formulates recommendations for progress towards deep language understanding for French.","Université Paris-Saclay, CNRS, LISN, Paris, France","nlp/resources, French, nlp/language-models, nlp/text-corpora","The CommonCrawl project aggregates Web data that is orders of magnitude larger than these resources; and it is updated on a regular basis. Using French subsets of CommonCrawl, it has been possible to train large language models (LMs): FlauBERT uses a corpus of 12B running words, while CamemBERT uses the 22B words OSCAR. Other large LMs for French are available for research and commercial use; they help to boost the state-of-the-art for multiple NLP tasks.",,,,
"Asaad Alghamdi, Xinyu Duan, Wei Jiang, Zhenhai Wang, Yimeng Wu, Qingrong Xia, Zhefeng Wang, Yi Zheng, Mehdi Rezagholizadeh, Baoxing Huai, Peilun Cheng, Abbas Ghaddar &ndash; AI Cognitive Team, Tonomus; Huawei Cloud Computing Technologies Co., Ltd.; Huawei Technologies Co., Ltd.",AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural Language Processing,https://arxiv.org/abs/2306.06800,papers,20230101Z00:00:00,,,"AI Cognitive Team, Tonomus; Huawei Cloud Computing Technologies Co., Ltd.; Huawei Technologies Co., Ltd.","nlp/language-models, nlp/large-language-models, nlp/text-corpora","We mainly leverage all (up to July 2022) of the 90 Common Crawl³ monthly web scrapes in order to collect massive amount of Arabic textual data. [...] Our pre-training corpus is mainly sourced from the publicly available web scrapes of the Common Crawl (CC) project. We downloaded 90 shards of CC monthly data ranging from May 2013 (the earliest available) up to July 2022. Also, we use [...]",,,,
"Denley Lam, Letitia Li, Cory Anderson &ndash; FAST Labs, BAE Systems, Arlington, VA, USA",PDF investigation with parser differentials and ontology,https://www.techrxiv.org/articles/preprint/PDF_investigation_with_parser_differentials_and_ontology/23290277,papers,20230101Z00:00:00,,,"FAST Labs, BAE Systems, Arlington, VA, USA","data formats, PDF, PDF parsing, information-security, computer-security","Three thousand and twenty one error regexes were gathered by running our set of PDF parsers through Govdocs1 [27] and Common Crawl [28], a collection of nearly one million freely distributable document files.",,,,
"Joel E. Fischer &ndash; Mixed Reality Laboratory, School of Computer Science, University of Nottingham, United Kingdom",Generative AI Considered Harmful,https://doi.org/10.1145/3571884.3603756,papers,20230101Z00:00:00,,,"Mixed Reality Laboratory, School of Computer Science, University of Nottingham, United Kingdom","ai/ethics-of-machine-learning, nlp/large-language-models, nlp/generative-language-models, cc-cited-not-used","[⁷This article is written for the CUI 2023 “Provocations” track that “should have the potential to spark debate and discussion at the conference”] [...] It's worth noting that the lack of attribution starts with Common Crawl and similar archives; they appear to erase authorship and ownership of its sources, the largely human-written contents on websites. Instead of a heterogeneous collection of web sites (i.e., the WWW), it becomes just one homogeneous and anonymous “dataset”. This coincides with a worrying trend of these archives to frame their work as contributing to notions of “open data” and asking for “data donation” without explicating stance on ownership (you lose it) and attribution (there is none)¹⁰. [¹⁰https://commoncrawl.org/big-picture/what-you-can-do/]",,,,
"Andrea Stocco, Alexandra Willi, Luigi Libero Lucio Starace, Matteo Biagiola, Paolo Tonella &ndash; Università della Svizzera italiana, Switzerland; Università degli Studi di Napoli Federico II, Italy",Neural Embeddings for Web Testing,https://arxiv.org/pdf/2306.07400.pdf,papers,20230101Z00:00:00,,,"Università della Svizzera italiana, Switzerland; Università degli Studi di Napoli Federico II, Italy","web-testing, nlp/word-embeddings, neural-embeddings, GUI-testing","We use three existing datasets available from the study by Yandrapally et al. [12], plus an additional dataset of web pages collected by the Common Crawl project [38]. [...] For training Doc2Vec, we used an additional dataset (listed third in Table 1) of 368,927 web pages available from the Common Crawl project [38], also used in previous research [19]. We refer to this dataset as CC. Similarly to DS, the web pages in CC are also collected by crawling real-world websites.",,,,
"Yanchen Wang, Lisa Singh &ndash; Georgetown University, USA",Adding guardrails to advanced chatbots,https://arxiv.org/pdf/2306.07500.pdf,papers,20230101Z00:00:00,,,"Georgetown University, USA","ai/ethics-of-machine-learning, nlp/large-language-models, nlp/generative-language-models, cc-cited-not-used","Our analysis confirms that ChatGPT learns everything from human, including their biases. According to OpenAI, 60% of the training data come from Common Crawl, a large data set consisting of web pages, extracted metadata and text extractions through a big web crawler since 2008. Another 22% of data are from WebText2, containing all Reddit posts until December 2017 that have a score of 3 or higher. Another 16% are from books [29 ]. In their training data, more than 80% of the data are from the Internet and online discussions. Researchers have already shown that online discussions are very biased [30,31,32,33].",,,,
"Stacey Taylor, Vlado Keselj &ndash; Dalhousie University","Don’t Worry Accountants, ChatGPT Won’t Be Taking Your Job... Yet",https://web.cs.dal.ca/~vlado/papers/cai23s.pdf,papers,20230101Z00:00:00,,"ChatGPT has demonstrated the ability to generate plausible human-like text and research is underway to evaluate and benchmark its current performance in various do- mains. The research we present here provides a preliminary benchmark on ChatGPT’s ability to emulate the style and information presented in financial statement note disclo- sures. Using text from Canada’s major banks (n = 5) over the period of 2019–2021, we query ChatGPT to generate two required note disclosures and compare its text against the note disclosures written by the banks in their corporate annual reports. We find that the similarity between ChatGPT’s text and the human-authored text is very low, but also find that ChatGPT’s text is significantly more readable for one of the two disclosures (p < 0.05).",Dalhousie University,"ChatGPT, Machine Learning, Financial Statements, Similarity, Stylometry, Readability","Finally, ChatGPT was trained on the common crawl web corpora which consists of 12 years of common crawl data [30 [T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. “Language models are few-shot learners”. In: Advances in neural information processing systems 33 (2020), pp. 1877–1901.]]. That means that for each of the 5 banks, there are only 12 annual reports that ChatGPT has seen. This could have a material effect on the outcome of its generation.",,,,
"Kyle Steinfeld &ndash; University of California, Berkeley, CA, USA",Clever little tricks: A socio-technical history of text-to-image generative models,https://doi.org/10.1177/14780771231168230,papers,20230101Z00:00:00,,"The emergence of text-to-image generative models (e.g., Midjourney, DALL-E 2, Stable Diffusion) in the summer of 2022 impacted architectural visual culture suddenly, severely, and seemingly out of nowhere. To contextualize this phenomenon, this text offers a socio-technical history of text-to-image generative systems. Three moments in time, or “scenes,” are presented here: the first at the advent of AI in the middle of the last century; the second at the “reawakening” of a specific approach to machine learning at the turn of this century; the third that documents a rapid sequence of innovations, dubbed “clever little tricks,” that occurred across just 18 months. This final scene is the crux, and represents the first formal documentation of the recent history of a specific set of informal innovations. These innovations were produced by non-affiliated researchers and communities of creative contributors, and directly led to the technologies that so compellingly captured the architectural imagination in the summer of 2022. Across these scenes, we examine the technologies, application domains, infrastructures, social contexts, and practices that drive technical research and shape creative practice in this space.","University of California, Berkeley, CA, USA","ai/text-to-image-models, ai/generative-models, architecture, architectural visual culture","The LAION-400 dataset consists of 400 million image-caption pairs extracted from random selections of web pages from a web scrape that captured sites between 2014 and 2021 that was conducted by Common Crawl (a separate non- profit established in 2011 “with the goal of democratizing access to web information by producing and maintaining an open repository of web crawl data”).⁷⁵ [⁷⁵ Gil E. About common crawl. 2011. https://commoncrawl.org/about/ (accessed 04 December 2022).] Although it specifically was “not meant for any real-world production or application,”⁷⁶ [⁷⁶ Schuhmann C. LAION-400-Million open dataset. December 12, 2022. https://laion.ai/blog/laion-400-open-dataset (accessed 04 December 2022).] this dataset was used by Google to train its text-to-image generative model “Imagen” in 2022.⁷⁷ [⁷⁷ Saharia C, Chan W, Saxena S, et al. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. Epub ahead of print May 2022. DOI: 10.48550/arXiv.2205.11487]",,,,
"Zahra Moti, Asuman Senol, Hamid Bostani, Frederik Zuiderveen Borgesius, Veelasha Moonsamy, Arunesh Mathur, Gunes Acar &ndash; Radboud University, Netherlands; imec-COSIC, KU Leuven, Belgium; Ruhr University Bochum, Germany",Targeted and Troublesome: Tracking and Advertising on Children's Websites,https://arxiv.org/pdf/2308.04887.pdf,papers,20230101Z00:00:00,,"On the modern web, trackers and advertisers frequently construct and monetize users' detailed behavioral profiles without consent. Despite various studies on web tracking mechanisms and advertisements, there has been no rigorous study focusing on websites targeted at children. To address this gap, we present a measurement of tracking and (targeted) advertising on websites directed at children. Motivated by lacking a comprehensive list of child-directed (i.e., targeted at children) websites, we first build a multilingual classifier based on web page titles and descriptions. Applying this classifier to over two million pages, we compile a list of two thousand child-directed websites. Crawling these sites from five vantage points, we measure the prevalence of trackers, fingerprinting scripts, and advertisements. Our crawler detects ads displayed on child-directed websites and determines if ad targeting is enabled by scraping ad disclosure pages whenever available. Our results show that around 90% of child-directed websites embed one or more trackers, and about 27% contain targeted advertisements--a practice that should require verifiable parental consent. Next, we identify improper ads on child-directed websites by developing an ML pipeline that processes both images and text extracted from ads. The pipeline allows us to run semantic similarity queries for arbitrary search terms, revealing ads that promote services related to dating, weight loss, and mental health; as well as ads for sex toys and flirting chat services. Some of these ads feature repulsive and sexually explicit imagery. In summary, our findings indicate a trend of non-compliance with privacy regulations and troubling ad safety practices among many advertisers and child-directed websites. To protect children and create a safer online environment, regulators and stakeholders must adopt and enforce more stringent measures.","Radboud University, Netherlands; imec-COSIC, KU Leuven, Belgium; Ruhr University Bochum, Germany","web-science/tracking, web-science/advertisements, computer-security/internet-security","Applying the classifier to the Common Crawl dataset [32], we compiled a list of 2K manually verified child-directed websites. [...] Our preliminary analysis of over 500K web pages from the most popular one million websites in the Common Crawl dataset [32] showed that more than 97% of the websites have a title, 63% of the websites include a description, and 24% contain a keywords meta tag. [...] Applying this method to the WAT metadata files from the June-July 2022 Common Crawl snapshot [32], we extracted the titles and descriptions, limiting ourselves to the top million websites in the Tranco [26] or the CrUX [82] list. [...] [32] “June/July 2022 crawl archive now available – Common Crawl,” https://commoncrawl.org/2022/07/june-july-2022-crawl-archive-now-available, 2023, [Online; accessed 28. Feb. 2023].",,,,
"Juhani Luotolahti, Jenna Kanerva, Jouni Luoma, Valtteri Skantsi, Sampo Pyysalo, Veronika Laippala, Filip Ginter &ndash; University of Turku, Finland; University of Oulu, Finland",Finnish Internet Parsebank,https://www.researchsquare.com/article/rs-3138153/v1,papers,20230101Z00:00:00,,"We present a Finnish web corpus with multiple text sources and rich additional annotations. The corpus is based in large parts on a dedicated Internet crawl, supplementing data from the Common Crawl initiative and the Finnish Wikipedia. The size of the corpus is 6.2 billion tokens from 9.5 million source documents. The text is enriched with morphological analyses, word lemmas, dependency trees, named entities and text register (genre) identification. Paragraph-level scores of an n-gram language model, as well as paragraph duplication rate in each document are provided, allowing for further filtering of the dataset by the end user. Thanks to changes in the 2023 Finnish copyright legislation, the corpus is openly available for research purposes, and can also be accessed through the NoSketchEngine concordance tool and the dep search dependency tree query tool, all at https://turkunlp.org/finnish nlp.html.","University of Turku, Finland; University of Oulu, Finland","nlp/corpus-construction, language-specific corpus, web-as-corpus, nlp/dependency-tree-bank, Finnish","3.1 Data sources ¶ Our corpus is based on three primary data sources: Finnish Wikipedia, Common Crawl, and a custom web-crawl. [...] The Common Crawl dataset includes both plain text and raw HTML files, at the time without language metadata. We employed a language detection step using CLD3 as the language detector and MapReduce to download only the Finnish-language plaintext from the Amazon cloud service that hosts Common Crawl. As shown in Table2, this resulted in only a moderate amount of new data (3.2GB deduplicated text) ontop of Wikipedia (1.5GB deduplicated text). ¶ Consequently, we conducted a dedicated web crawl using the SpiderLing webcrawler (Suchomel & Pomikálek,2012). This web crawler is specifically designed forcollecting monolingual plaintext web corpora. It comprises a web crawling engine, atrigram-based language detector, and a boilerplate remover called Justext, which isresponsible for extracting plain text. Moreover, the crawler is lightweight and easyto run. The crawl was seeded with the list of all domain names in the.fi top-level domain, as well as the URLs of all Finnish text pages we gathered from CommonCrawl in the previous step. The crawl was carried out between 2014 and 2016. ¶ The final sizes of text obtained from the three sources are summarized in Table2, which shows that the dedicated webcrawl constitutes by far the largest portion of the final corpus. Note that in the newer versions of Common Crawl, a considerably stronger emphasis is placed on multilingual coverage, and the benefit of a dedicated webcrawl might be smaller but very unlikely to vanish entirely.",,,,
R. Tenis &ndash; ,"Modelling an Efficient URL Phishing Detection Approach Based on a Dense Network Model. Computer Systems Science & Engineering . 2023, Vol. 47 Issue 2, p2625-2641. 17p.",https://web.s.ebscohost.com/abstract?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=02676192&AN=169779920&h=WGjAKpK7ACB1ZcUfp8Ikhm9IcDPjsbjptgyhA5ityW47Z2oYK4JmZTEMhj6t1UhLOFgbraBWyMgS1NID6mz%2bcA%3d%3d&crl=c&resultNs=AdminWebAuth&resultLocal=ErrCrlNotAuth&crlhashurl=login.aspx%3fdirect%3dtrue%26profile%3dehost%26scope%3dsite%26authtype%3dcrawler%26jrnl%3d02676192%26AN%3d169779920,papers,20230101Z00:00:00,,"The social engineering cyber-attack is where culprits mislead the users by getting the login details which provides the information to the evil server called phishing. The deep learning approaches and the machine learning are compared in the proposed system for presenting the methodology that can detect phishing websites via Uniform Resource Locator (URLs) analysis. The legal class is composed of the home pages with no inclusion of login forms in most of the present modern solutions, which deals with the detection of phishing. Contrarily, the URLs in both classes from the login page due, considering the representation of a real case scenario and the demonstration for obtaining the rate of false-positive with the existing approaches during the legal login pages provides the test having URLs. In addition, some model reduces the accuracy rather than training the base model and testing the latest URLs. In addition, a feature analysis is performed on the present phishing domains to identify various approaches to using the phishers in the campaign. A new dataset called the MUPD dataset is used for evaluation. Lastly, a prediction model, the Dense forward-backwards Long Short Term Memory (LSTM) model (d - FBLSTM), is presented for combining the forward and backward propagation of LSMT to obtain the accuracy of 98.5% on the initiated login URL dataset.",,"computer-security/internet-security, web-security","The PhishTank provides the URLs for phishing to be gathered, and the Common Crawl provides the legal URLs.",,,,
"Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari Morcos &ndash; FAIR, Meta AI",D4: Improving LLM Pretraining via Document De-Duplication and Diversification,https://dmlr.ai/assets/accepted-papers/131/CameraReady/LLM_Data_Pruning_Paper_Camera_Ready.pdf,papers,20230101Z00:00:00,,,"FAIR, Meta AI","nlp/large-language-models, nlp/corpus-construction, deduplication","We perform all of our training runs on a version of CommonCrawl pre-processed with a CCNet (Wenzek et al., 2019) pipeline identical to the one used by Touvron et al. (2023). We add an additional step of MinHash-based de-duplication (see more details in Section A.1). Applying this common step before our experiments guarantees that any effects observed in our experiments complement the currently prevalent approach of MinHash-based data de-duplication strategies. Throughout the rest of this work, we refer to this dataset as CC-dedup. [...] A.1.2. DATASET CURATION DETAILS In this subsection, we describe how we curate CC-dedup, the starting source dataset used throughout the paper. We start with 5 CommonCrawl dumps³ [³https://commoncrawl.org/the-data/get-started/] which range from 2017 to 2020. We then use CC-net (Wenzek et al., 2019), to de-duplicate data at the paragraph level, remove non-English web pages, and filter out low-quality pages. The pipeline we use is identical to the pipeline used in Touvron et al. (2023) (see the section after the subtitle ”English CommonCrawl [67%]”, within Section 2). On top of this, we add an additional step of MinHash (Broder, 1997) de-duplication at the document-level. The parameters for MinHash are 20 hashes per signature, 20 buckets, and 1 row per bucket. These parameters are the default parameters in the spark implementation of MinHashLSH, and we did not do a hyperparameter sweep on these parameters due to compute limitations. Previous work has attempted running MinHash with much more aggressive parameters: Lee et al. (2021) and Penedo et al. use 20 buckets, 450 hashes per bucket, and 9000 signatures per hash. We conjecture that more aggressive MinHash would remove more templates, resulting in a higher-quality starting dataset, potentially making the SemDeDup step of D4 less necessary. Abbas et al. (2023) did find that the performance of MinHash from Lee et al. (2021) and SemDeDup are comparable at a fixed data selection ratio of 3.9% on C4, indicating that SemDeDup filters out similar data to aggressive MinHash does. We leave sweeping over these hyperparameters as future work. We note that since our dataset is curated from CommonCrawl dumps, there is risk that our training set contains offensive or PII content. We note, however, that this risk is no more than that of standard language modeling curation such as Touvron et al. (2023), since we use the same pipeline to filter CommonCrawl dumps.",,,,
"Liang Wang, Hyojoon Kim, Prateek Mittal, Jennifer Rexford &ndash; Princeton University, USA",RAVEN: Stateless Rapid IP Address Variation for Enterprise Networks.,https://petsymposium.org/2023/files/papers/issue3/popets-2023-0077.pdf,papers,20230101Z00:00:00,"privacy, traffic analysis, programmable data plane, P4, QUIC","Enterprise networks face increasing threats against the privacy of their clients. Existing enterprise services like Network Address Translation (NAT) offer limited privacy protection, at the cost of requiring per-flow state. In this paper, we introduce RAVEN (Rapid Address Variation for Enterprise Networks), a network-based privacy solution that is complementary to application-layer defenses. RAVEN protects privacy by frequently changing the client’s public IP address. With RAVEN, a client is not limited to using a single IP address at a given time, or even for a given connection. RAVEN goes further, breaking the association between packets that belong to the same connection by frequently changing the client’s IP address within a single connection. RAVEN achieves this through a novel division of labor: the client uses a transport protocol, like QUIC, that supports seamless connection migration, and decides when to switch its IP address, while the enterprise network actually changes the client’s IP address in a stateless manner at line rate and ensures end-to-end packet delivery. We implement RAVEN using QUIC and off-the-shelf programmable switches. We deploy RAVEN in a test IPv6 network and evaluate its defense against webpage fingerprinting attacks. Even with a strong adversary, the average precision of the best adaptive attacks drops from 0.96 to 0.84, with a 0.5% degradation in client throughput. When RAVEN changes IP addresses at unpredictable frequency, the precision of the best attacks falls to 0.78—the same effectiveness as WTF-PAD.","Princeton University, USA","computer-security/internet-security, privacy, internet traffic analysis","Webpages to fingerprint. To find webpages on GitHub Pages, we search the Common Crawl database [59] (Jan 2022 release) to extract URLs whose domain names end with “*.github.io”. From about 0.8 M URLs, we sampled 100 URLs as monitored webpages and 10,000 URLs as unmonitored. [...] [⁵⁹] The Common Crawl team. 2022. The Common Crawl Dataset. https://commoncrawl.org/.",CC-MAIN-2022-05,,,
"Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust &ndash; Google DeepMind; The University of Tokyo, Japan","A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis",https://arxiv.org/pdf/2307.12856.pdf,papers,20230101Z00:00:00,,"Pre-trained large language models (LLMs) have recently achieved better generalization and sample efficiency in autonomous web navigation. However, the performance on real-world websites has still suffered from (1) open domainness, (2) limited context length, and (3) lack of inductive bias on HTML. We introduce WebAgent, an LLM-driven agent that can complete the tasks on real websites following natural language instructions. WebAgent plans ahead by decomposing instructions into canonical sub-instructions, summarizes long HTML documents into task-relevant snippets, and acts on websites via generated Python programs from those. We design WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, new pre-trained LLMs for long HTML documents using local and global attention mechanisms and a mixture of long-span denoising objectives, for planning and summarization. We empirically demonstrate that our recipe improves the success on a real website by over 50%, and that HTML-T5 is the best model to solve HTML-based tasks; achieving 14.9% higher success rate than prior SoTA on the MiniWoB web navigation benchmark and better accuracy on offline task planning evaluation.","Google DeepMind; The University of Tokyo, Japan","nlp/language-models, web-agent, autonomous web navigation, autonomous web browsing","For the dataset, we prepare 100 WARC files (April 2019) from CommonCrawl, and pre-process the raw HTML by re- moving non-Unicode and alphanumeric documents and extracting subtrees around <label> elements that have for attribute, to reduce the noise in training corpus, which results in about 3.41M examples (Table 1).",,,,
"Hynek Kydlíček, Jindřich Libovický &ndash; Univerzita Karlova, Czech Republic",A Dataset and Strong Baselines for Classification of Czech News Texts,https://arxiv.org/pdf/2307.10666.pdf,papers,20230101Z00:00:00,"News classification, NLP in Czech, News Dataset","Pre-trained models for Czech Natural Language Processing are often evaluated on purely linguistic tasks (POS tagging, parsing, NER) and relatively simple classification tasks such as sentiment classification or article classification from a single news source. As an alternative, we present CZEch~NEws~Classification~dataset (CZE-NEC), one of the largest Czech classification datasets, composed of news articles from various sources spanning over twenty years, which allows a more rigorous evaluation of such models. We define four classification tasks: news source, news category, inferred author's gender, and day of the week. To verify the task difficulty, we conducted a human evaluation, which revealed that human performance lags behind strong machine-learning baselines built upon pre-trained transformer models. Furthermore, we show that language-specific pre-trained encoder analysis outperforms selected commercially available large-scale generative language models.","Univerzita Karlova, Czech Republic","nlp/corpus-construction, nlp/text-classification, ir/information-extraction, news-classification","We create the CZE-NEC by crawling Czech news websites from CommonCrawl (§ 2.1) and use the available metadata to define classification tasks (§ 2.3). [...] We have collected the news stories text from the following six Czech online news providers: SeznamZprávy.cz, iRozhlas.cz, Novinky.cz, Deník.cz, iDnes.cz, and Aktuálně.cz. Instead of crawling the pages directly, we used the CommonCrawl archive to extract the articles.",,,,
"Hynek Kydlíček &ndash; Univerzita Karlova, Czech Republic",Implicit information extraction from news stories,http://hdl.handle.net/20.500.11956/183054,papers,20230101Z00:00:00,,,"Univerzita Karlova, Czech Republic","nlp/corpus-construction, nlp/text-classification, ir/information-extraction, news-classification","We used Common Crawl² [²https://commoncrawl.org/] as a data source, as crawling live websites would be infeasible. For extraction, we developed a custom tool C’monCrawl³ [³https://github.com/hynky1999/CmonCrawl], which allows end-to-end extraction of Common Crawl data. We then deployed it in distributed setting on Artificial Intelligence Cluster (AIC)⁴ [⁴https://aic.ufal.mff.cuni.cz/], processed 49.2M URLs and extracted 3.2M articles.",,,,
"Matyas Bohacek, Michal Bravansky, Filip Trhlík, Václav Moravec &ndash; Faculty of Social Sciences, Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom",Czech-ing the News: Article Trustworthiness Dataset for Czech,https://aclanthology.org/2023.wassa-1.10/,papers,20230101Z00:00:00,,"We present the Verifee dataset: a multimodal dataset of news articles with fine-grained trustworthiness annotations. We bring a diverse set of researchers from social, media, and computer sciences aboard to study this interdisciplinary problem holistically and develop a detailed methodology that assesses the texts through the lens of editorial transparency, journalist conventions, and objective reporting while penalizing manipulative techniques. We collect over 10,000 annotated articles from 60 Czech online news sources. Each item is categorized into one of the 4 proposed classes on the credibility spectrum {--} ranging from entirely trustworthy articles to deceptive ones {--} and annotated of its manipulative attributes. We fine-tune prominent sequence-to-sequence language models for the trustworthiness classification task on our dataset and report the best F-1 score of 0.53. We open-source the dataset, annotation methodology, and annotators{'} instructions in full length at https://www.verifee.ai/research/ to enable easy build-up work.","Faculty of Social Sciences, Charles University, Prague, Czech Republic; Gymnasium of Johannes Kepler, Prague, Czech Republic; University College London, United Kingdom","nlp/corpus-construction, nlp/fake-news-detection, news-classification","Initially, we assembled nearly 94, 000 articles by scraping URLs of 60 Czech news sources² obtained from Common Crawl³. These sources included mainstream journalistic websites, tabloids, independent news outlets, and websites that are part of the disinformation ecosystem (Štˇetka et al., 2021), capturing the full scope of journalistic content in the Czech Republic. [...] We applied multiple filters and balancing mechanisms based on text length and topics to mitigate deficiencies caused by inherent flaws in Common Crawl, which reduced the dataset’s size from 94, 000 to 10, 197 items. This way, we also ensured that the data is as representative of the Czech news ecosystem and as diverse as possible.",,,,
"Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jingyuan Wang, Jian-Yun Nie, Ji-Rong Wen &ndash; Gaoling School of Artificial Intelligence, Renmin University of China, China; School of Information, Renmin University of China, China; DIRO, Université de Montréal, Canada; School of Computer Science and Engineering, Beihang University, China; Beijing Key Laboratory of Big Data Management and Analysis Methods, China",The Web Can Be Your Oyster for Improving Language Models,https://aclanthology.org/2023.findings-acl.46.pdf,papers,20230101Z00:00:00,,"Pretrained language models (PLMs) encode a large amount of world knowledge. However, as such knowledge is frozen at the time of model training, the models become static and limited by the training data at that time. In order to further improve the capacity of PLMs for knowledge-intensive tasks, we consider augmenting PLMs with the large-scale web using search engine. Unlike previous augmentation sources (e.g., Wikipedia data dump), the web provides broader, more comprehensive and constantly updated information. In this paper, we present a web-augmented PLM – UniWeb, which is trained over 16 knowledge-intensive tasks in a unified text-to-text format. Instead of simply using the retrieved contents from web, our approach has made two major improvements. Firstly, we propose an adaptive search engine assisted learning method that can self-evaluate the confidence level of PLM’s predictions, and adaptively determine when to refer to the web for more data, which can avoid useless or noisy augmentation from web. Secondly, we design a pretraining task, i.e., continual knowledge learning, based on salient spans prediction, to reduce the discrepancy between the encoded and retrieved knowledge. Experiments on a wide range of knowledge-intensive tasks show that our model significantly outperforms previous retrieval-augmented methods.","Gaoling School of Artificial Intelligence, Renmin University of China, China; School of Information, Renmin University of China, China; DIRO, Université de Montréal, Canada; School of Computer Science and Engineering, Beihang University, China; Beijing Key Laboratory of Big Data Management and Analysis Methods, China","nlp/large-language-models,","[...] we select the CCNet snapshot corresponding to the August 2019 Common Crawl snapshot which covers a wide range of 134M web documents and finally yields 906M passages of 100 tokens. CCNet processes Common Crawl through deduplication, language identification and quality filtering based on perplexity calculated by a lan- guage model.",,,CCNet,
"Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, Thien Huu Nguyen &ndash; University of Oregon, USA; Adobe Research, USA","CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages",https://arxiv.org/pdf/2309.09400.pdf,papers,20230101Z00:00:00,,"The driving factors behind the development of large language models (LLMs) with impressive learning capabilities are their colossal model sizes and extensive training datasets. Along with the progress in natural language processing, LLMs have been frequently made accessible to the public to foster deeper investigation and applications. However, when it comes to training datasets for these LLMs, especially the recent state-of-the-art models, they are often not fully disclosed. Creating training data for high-performing LLMs involves extensive cleaning and deduplication to ensure the necessary level of quality. The lack of transparency for training data has thus hampered research on attributing and addressing hallucination and bias issues in LLMs, hindering replication efforts and further advancements in the community. These challenges become even more pronounced in multilingual learning scenarios, where the available multilingual text datasets are often inadequately collected and cleaned. Consequently, there is a lack of open-source and readily usable dataset to effectively train LLMs in multiple languages. To overcome this issue, we present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for LLM development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs: [https://huggingface.co/datasets/uonlp/CulturaX]","University of Oregon, USA; Adobe Research, USA","nlp/corpus-construction, dataset-creation, nlp/large-language-models",,,CulturaX,"Tensorflow-C4-Multilingual, OSCAR",
"Sneha Kudugunta, Isaac Caswell, Biao Zhang, Xavier Garcia, Christopher A. Choquette-Choo, Katherine Lee, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat &ndash; Google DeepMind; Google Research",MADLAD-400: A Multilingual And Document-Level Large Audited Dataset,https://arxiv.org/pdf/2309.04662.pdf,papers,20230101Z00:00:00,,"We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models 1 available to the research community.",Google DeepMind; Google Research,"nlp/corpus-construction, dataset-creation, nlp/large-language-models","A common approach to creating such datasets is to mine language specific data from general web crawls such as CommonCrawl [57, 43, 68] to create datasets. We simply take this approach and scale it. We train a document-level LangID model on 498 languages to obtain CommonCrawl annotations at a document level and obtain a 5-trillion token, document-level monolingual dataset. [...] First, we collect as large a dataset of unlabeled web text as possible. More specifically, we use all available snapshots of CommonCrawl2 as of August 20, 2022. After some preliminary data cleaning, we use a highly multilingual LangID model to provide document-level annotations (Section 2.2). Finally, we conduct a self-audit (Section 2.4), or quality review, of this preliminary dataset partitioned by language, and design filters to remove noisy content. When appropriate, we correct language names and remove languages from the preliminary dataset. We note that building MADLAD-400 was an iterative process, and that while we describe one major quality review in depth, we conducted several stages of filtering.",,MADLAD-400,,
"Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, Jimmy Ba &ndash; University of Toronto, Canada; University of Cambridge, United Kingdom; Princeton University, USA",OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text,https://arxiv.org/abs/2310.06786,papers,20230101Z00:00:00,,"There is growing evidence that pretraining on high quality, carefully thought-out tokens such as code or mathematics plays an important role in improving the reasoning abilities of large language models. For example, Minerva, a PaLM model finetuned on billions of tokens of mathematical documents from arXiv and the web, reported dramatically improved performance on problems that require quantitative reasoning. However, because all known open source web datasets employ preprocessing that does not faithfully preserve mathematical notation, the benefits of large scale training on quantitive web documents are unavailable to the research community. We introduce OpenWebMath, an open dataset inspired by these works containing 14.7B tokens of mathematical webpages from Common Crawl. We describe in detail our method for extracting text and LaTeX content and removing boilerplate from HTML documents, as well as our methods for quality filtering and deduplication. Additionally, we run small-scale experiments by training 1.4B parameter language models on OpenWebMath, showing that models trained on 14.7B tokens of our dataset surpass the performance of models trained on over 20x the amount of general language data. We hope that our dataset, openly released on the Hugging Face Hub, will help spur advances in the reasoning abilities of large language models.","University of Toronto, Canada; University of Cambridge, United Kingdom; Princeton University, USA","mathematics, mathematical text, nlp/corpus-construction, dataset-creation, nlp/large-language-models","We extract documents from Common Crawl¹ [¹ https://commoncrawl.org/], applying our pipeline to extract text while preserving mathematical content in the form of LATEX equations. We then filter the documents, ensuring that only high-quality English mathematical documents are kept. Finally, we deduplicate the dataset, resulting in 14.7B tokens of high-quality mathematical content suitable for both pretraining and finetuning large language models.",,OpenWebMath,,
"Minh-Hoang Dang, Alban Gaignard, Hala Skaf-Molli, Pascal Molli &ndash; Nantes Université, France",Schema.org: How is it used?,https://hal.science/hal-04250523/document,papers,20230101Z00:00:00,,"Schema.org defines a shared vocabulary for semantically annotating web pages. Due to the vast and diverse nature of the contributed annotations, it is not easy to understand the widespread use of Schema.org. In this poster, we rely on the characteristic sets computed from the web data commons datasets to provide insights into property combinations on various websites. Thanks to in-depth experiments, this poster establishes a comprehensive observatory for schema.org annotations, visually presenting the most frequently used classes, commonly used combinations of properties per class, the average number of filled properties per class, and the classes with the greatest property coverage. These findings are valuable for both the communities involved in defining Schema.org vocabularies and the users of these vocabularies.","Nantes Université, France","semantic web, linked data","The Web Data Commons [3, 4] project extracts semantic annotations from the Common Crawl annually since 20102. It provides a reference dataset to study the evolution and adoption of semantic annotations in web pages. The extracted data is represented with RDF quads, which consist of RDF statements along with the URL of the corresponding web page. The abundance of annotations on the web and the diversity of contributors raise challenges in understanding how Schema.org is used at the web-scale. [...] We used the JSON-LD (most common formats) dataset from the WebDataCommons [3 ] released in October 2021. This dataset is derived from crawling 35 million websites, of which 42% utilized Web Entities. It comprises 82 billion RDF quads (16 terabytes uncompressed) and 6.7 billion Schema.org entities.",,,WDC-triples,
"Qi Yan, Raihan Seraj, Jiawei He, Lili Meng, Tristan Sylvain &ndash; University of British Columbia, Canada; McGill University, Canada; Borealis AI",AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval,https://arxiv.org/pdf/2310.01880.pdf,papers,20230101Z00:00:00,,"Machine-based prediction of real-world events is garnering attention due to its potential for informed decision-making. Whereas traditional forecasting predominantly hinges on structured data like time-series, recent breakthroughs in language models enable predictions using unstructured text. In particular, (Zou et al., 2022) unveils AutoCast, a new benchmark that employs news articles for answering forecasting queries. Nevertheless, existing methods still trail behind human performance. The cornerstone of accurate forecasting, we argue, lies in identifying a concise, yet rich subset of news snippets from a vast corpus. With this motivation, we introduce AutoCast++, a zero-shot ranking-based context retrieval system, tailored to sift through expansive news document collections for event forecasting. Our approach first re-ranks articles based on zero-shot question-passage relevance, honing in on semantically pertinent news. Following this, the chosen articles are subjected to zero-shot summarization to attain succinct context. Leveraging a pre-trained language model, we conduct both the relevance evaluation and article summarization without needing domain-specific training. Notably, recent articles can sometimes be at odds with preceding ones due to new facts or unanticipated incidents, leading to fluctuating temporal dynamics. To tackle this, our re-ranking mechanism gives preference to more recent articles, and we further regularize the multi-passage representation learning to align with human forecaster responses made on different dates. Empirical results underscore marked improvements across multiple metrics, improving the performance for multiple-choice questions (MCQ) by 48% and true/false (TF) questions by up to 8%.","University of British Columbia, Canada; McGill University, Canada; Borealis AI","information retrieval, event detection, nlp/large-language-models, news","We incorporate news articles from the Common Crawl corpus¹ [¹Common Crawl - Open Repository of Web Crawl Data, https://commoncrawl.org/] spanning 2016 to 2022 for retrieval purpose.",,,,
"Gus Eggert, Kevin Huo, Mike Biven, Justin Waugh &ndash; Approximate Labs, Boulder, USA",TabLib: A Dataset of 627M Tables with Context,https://arxiv.org/pdf/2310.07875.pdf,papers,20230101Z00:00:00,,"It is well-established that large, diverse datasets play a pivotal role in the performance of modern AI systems for text and image modalities. However, there are no datasets for tabular data of comparable size and diversity to those available for text and images. Thus we present {""}TabLib'', a compilation of 627 million tables totaling 69 TiB, along with 867B tokens of context. TabLib was extracted from numerous file formats, including CSV, HTML, SQLite, PDF, Excel, and others, sourced from GitHub and Common Crawl. The size and diversity of TabLib offer considerable promise in the table modality, reminiscent of the original promise of foundational datasets for text and images, such as The Pile and LAION.","Approximate Labs, Boulder, USA","dataset creation, web tables","We used the latest crawl at the time, which was CC-MAIN-2023-23. Common Crawl results are serialized using the WARC format, which includes “request” and “response” records. We only considered response records. We discarded “truncated” responses which had response lengths that exceed Common Crawl’s limit. If a WARC-Identified-Payload- Type record header was included in the record, then we used its mimetype as a hint for detecting the content type, otherwise we used the Content-Type header in the HTTP response, and followed a similar approach as GitHub (use the mimetype if possible, otherwise use libmagic). About 20% of WARC files were dropped due to issues parsing certain HTML elements with Pandas.",,,,
"Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou &ndash; Alibaba Group",Data-Juicer: A One-Stop Data Processing System for Large Language Models,https://arxiv.org/pdf/2309.02033.pdf,papers,20230101Z00:00:00,,"The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly tailored for specific data recipes. To continuously uncover the potential of LLMs, incorporate data from new sources, and improve LLMs' performance, we build a new system named Data-Juicer, with which we can efficiently generate diverse data recipes, explore different possibilities in forming data mixtures, and evaluate their effects on model performance. Different from traditional data-analytics pipelines, Data-Juicer faces some unique challenges. Firstly, the possible data sources for forming data recipes are truly heterogeneous and massive with various qualities. Secondly, it is extremely expensive to precisely evaluate data recipes' impact on LLMs' performance. Thirdly, the end users of Data-Juicer, model developers, need sufficient flexibility to configure and evaluate different data recipes. Data-Juicer features a fine-grained abstraction of pipelines for constructing data recipes, with over 50 built-in operators for easy composition and extension. By incorporating visualization and auto-evaluation capabilities, Data-Juicer enables a timely feedback loop for both LLM pre-training and fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems for LLM training, evaluation, and distributed computing. The data recipes derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5% higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and tutorials are released, calling for broader data-centric research on training and understanding LLMs.",Alibaba Group,"dataset creation, nlp/corpus-construction, nlp/large-language-models",,,,,
"Wang Tongjing, Evert Meijers, Ziyu Bao, Huijuan Wang &ndash; Utrecht University, The Netherlands; Delft University of Technology, The Netherlands",Intercity networks and urban performance: a geographical text mining approach,https://www.tandfonline.com/doi/pdf/10.1080/12265934.2023.2253193,papers,20230101Z00:00:00,,"Compared to the burgeoning literature discussing the importance of agglomeration externalities for development, limited attention has been given to network externalities. This is largely due to limited data availability. We propose a general measure to proxy city network externalities based on toponym co-occurrences that indicate the relatedness between cities. This paper extracts intercity relationships based on the co-occurrence of Chinese place names on 2.5 billion webpages. We calculate and map absolute and relative network positions, which we use to explain urban labour productivity. We found that a stronger embeddedness in networks of cities is significantly and positively associated with urban productivity. Smaller cities benefit comparatively more from being well embedded in city networks, suggesting that these relations can compensate for a lack of agglomeration externalities. We also compare the importance for urban performance of city network externalities vis-à-vis agglomeration externalities. City network externalities turn out to be more important in explaining urban performance than agglomeration externalities. This calls for new theorizing on a relational approach to urban and regional development. Rather than stimulating further concentration of urbanization, our findings suggest that fostering relationships between cities is a viable alternative urban development strategy. We conclude with suggestions for a research agenda that delves deeper into city network externalities.","Utrecht University, The Netherlands; Delft University of Technology, The Netherlands","geography, web-mining, dataset creation","[...] like Meijers and Peris we prefer using corpora from the CommonCrawl Archive of webpages as our text corpus. We used the entire April 2019 database for processing and conducting experiments. The original database we extracted contains about 6.98 TB of uncompressed text containing 2.5 billion web pages crawled between 18 and 26 April 2019. We selected all pages using at least 10 Chinese characters. The filtered corpus contains approximately 110 billion Chinese words on 91 million pages from 1067 different domains. Over 91% of the tokens are from websites registered under the four top-level domains (TLD): .com (62.23%), .cn (14.80%), .net (7.86%) and .org (2.68%). The four TLDs make up about 87.57% of pages.",,,,
"Mikko Aulamo, Nikolay Bogoychev, Shaoxiong Ji, Graeme Nail, Gema Ramírez-Sánchez, Jörg Tiedemann, Jelmer Van Der Linde, Jaume Zaragoza &ndash; University of Helsinki, ; University of Edinburgh, United Kingdom; Prompsit Language Engineering",HPLT: High Performance Language Technologies,https://aclanthology.org/2023.eamt-1.61.pdf,papers,20230101Z00:00:00,,"We describe the High Performance Language Technologies project (HPLT), a 3-year EU-funded project started in September 2022. HPLT will build a space combining petabytes of natural language data with large-scale model training. It will derive monolingual and bilingual datasets from the Internet Archive and CommonCrawl and build efficient and solid machine translation (MT) as well as large language models (LLMs). HPLT aims at providing free, sustainable and reusable datasets, models and workflows at scale using high-performance computing (HPC).","University of Helsinki, ; University of Edinburgh, United Kingdom; Prompsit Language Engineering","nlp/corpus-construction, nlp/large-language-models","Datasets: Starting from 7 PB of web-crawled data from the Internet Archive3 and 5 from CommonCrawl,4 we will derive monolingual and bilin- gual datasets for systematic LLM and MT building with a large language coverage.",,,,
"Raffaele Sommese, Roland van Rijswijk-Deij, Mattijs Jonker &ndash; University of Twente, The Netherlands",This Is a Local Domain: On Amassing Country-Code Top-Level Domains from Public Data,https://arxiv.org/pdf/2309.01441.pdf,papers,20230101Z00:00:00,,"Domain lists are a key ingredient for representative censuses of the Web. Unfortunately, such censuses typically lack a view on domains under country-code top-level domains (ccTLDs). This introduces unwanted bias: many countries have a rich local Web that remains hidden if their ccTLDs are not considered. The reason ccTLDs are rarely considered is that gaining access -- if possible at all -- is often laborious. To tackle this, we ask: what can we learn about ccTLDs from public sources? We extract domain names under ccTLDs from 6 years of public data from Certificate Transparency logs and Common Crawl. We compare this against ground truth for 19 ccTLDs for which we have the full DNS zone. We find that public data covers 43%-80% of these ccTLDs, and that coverage grows over time. By also comparing port scan data we then show that these public sources reveal a significant part of the Web presence under a ccTLD. We conclude that in the absence of full access to ccTLDs, domain names learned from public sources can be a good proxy when performing Web censuses.","University of Twente, The Netherlands","dataset creation, internet domain names, ccTLDs, country-code top-level domains","Common Crawl – Common Crawl is a nonprofit organization that builds and maintains a sizable, open repository of Web crawl data, offering years and petabytes of Web page data. The Common Crawl data lives in Amazon S3 as part of Amazon’s Open Data Sponsorship Program and is free for anyone to access. Crawls are seeded from a set of candidate domain names and the crawler follows links leading to other pages. Crawls are performed approximately every one to two months and contain raw Web page data, metadata and text extractions, among others. Relevant to our work, crawls accumulate many tens of millions of registered domain names that one can extract from the so-called URL index. [...] For Common Crawl we consider data for crawl snapshots dated between June 2017 and June 2023 (inclusive). There are 58 such snapshots, collectively accounting for 127 million registered domain names. The combined total number of unique registered domain names in our consolidated dataset is 430 million.",,,,
"Isaac Caswell, Lisa Wang, Isabel Papadimitriou &ndash; Google Research; Google DeepMind; Computer Science Department, Stanford University",Separating the Wheat from the Chaff with BREAD: An open-source benchmark and metrics to detect redundancy in text,https://arxiv.org/abs/2311.06440,papers,20230101Z00:00:00,,"Data quality is a problem that perpetually resurfaces throughout the field of NLP, regardless of task, domain, or architecture, and remains especially severe for lower-resource languages. A typical and insidious issue, affecting both training data and model output, is data that is repetitive and dominated by linguistically uninteresting boilerplate, such as price catalogs or computer-generated log files. Though this problem permeates many web-scraped corpora, there has yet to be a benchmark to test against, or a systematic study to find simple metrics that generalize across languages and agree with human judgements of data quality. In the present work, we create and release BREAD, a human-labeled benchmark on repetitive boilerplate vs. plausible linguistic content, spanning 360 languages. We release several baseline CRED (Character REDundancy) scores along with it, and evaluate their effectiveness on BREAD. We hope that the community will use this resource to develop better filtering methods, and that our reference implementations of CRED scores can become standard corpus evaluation tools, driving the development of cleaner language modeling corpora, especially in low-resource languages.","Google Research; Google DeepMind; Computer Science Department, Stanford University","nlp/corpus-construction, data quality, nlp/boilerplate-removal, redundancy","BREAD consists of randomly-chosen documents from the multilingual, common-crawl- based MADLAD-400 dataset (Kudugunta et al., 2023), which are then annotated by expert NLP- practitioner annotators.",,,MADLAD-400,
"Sneha Kudugunta, Isaac Rayburn Caswell, Biao Zhang, Xavier Garcia, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat &ndash; Google DeepMind; Google Research",MADLAD-400: A Multilingual And Document-Level Large Audited Dataset,https://openreview.net/forum?id=Y45ZCxslFx,papers,20230101Z00:00:00,,"We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.",Google DeepMind; Google Research,"nlp/corpus-construction, nlp/multi-lingual-corpus","A common approach to creating such datasets is to mine language specific data from general web crawls such as CommonCrawl [52, 38, 63] to create datasets. We simply take this approach and scale it. We train a document-level LangID model on 498 languages to obtain CommonCrawl annotations at a document level and obtain a 5-trillion token, document-level monolingual dataset.¶ [...] First, we collect as large a dataset of unlabeled web text as possible. More specifically, we use all available snapshots of CommonCrawl3 as of August 20, 2022. After some preliminary data cleaning, we use a highly multilingual LangID model to provide document-level annotations (Section 2.2). Finally, we conduct a self-audit (Section 2.4), or quality review, of this preliminary dataset partitioned by language, and design filters to remove noisy content. When appropriate, we correct language names and remove languages from the preliminary dataset. We note that building MADLAD-400 was an iterative process, and that while we describe one major quality review in depth, we conducted several stages of filtering.",,MADLAD-400,,