Languages: English
Multilinguality: monolingual
Size Categories: 10M<n<100M
Language Creators: found
Annotations Creators: no-annotation
Source Datasets: original

Update the "Dataset Creation" portion of the README.md with additional information from Bandy and Vincent (2021).

#3
Files changed (1)
  1. README.md +8 -4
README.md CHANGED
@@ -116,17 +116,20 @@ The data fields are the same among all splits.

 ### Curation Rationale

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+The books in BookCorpus were self-published by authors on smashwords.com, likely with a range of motivations. While it is safe to assume that authors publishing free books on smashwords.com had some motivation to share creative works with the world, there is no way to verify that they were interested in training AI systems. For example, many authors in BookCorpus explicitly license their books "for [the reader's] personal enjoyment only," limiting reproduction and redistribution. When notified about BookCorpus and its uses, one author from Smashwords said "it didn't even occur to me that a machine could read my book" ([The Guardian](https://www.theguardian.com/books/2016/sep/28/google-swallows-11000-novels-to-improve-ais-conversation)).

 ### Source Data

 #### Initial Data Collection and Normalization

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+Per [Bandy and Vincent (2021)](https://arxiv.org/abs/2105.05241), the text for each instance (book) was acquired via download from smashwords.com, collected with scraping software. While the original scraping program is not available, replicas (e.g. [soskek/bookcorpus](https://github.com/soskek/bookcorpus)) operate by first scraping smashwords.com to generate a list of links to free ebooks, downloading each ebook as an epub file, then converting each epub file into a plain text file. Books were included in the original BookCorpus if they were available for free on smashwords.com and longer than 20,000 words, thus representing a non-probabilistic convenience sample. The 20,000-word cutoff likely comes from the Smashwords interface, which provides a filtering tool to display only books "Over 20K words." The individuals involved in collecting BookCorpus and their compensation are unknown: the [original paper by Zhu and Kiros et al.](https://yknzhu.wixsite.com/mbweb) does not specify which authors collected and processed the data, nor how they were compensated. The timeframe over which BookCorpus was collected is unknown as well, beyond the fact that it was collected some time before the original paper was presented at the International Conference on Computer Vision (ICCV) in December 2015. It is unlikely that any ethical review processes were conducted; Zhu and Kiros et al. do not mention an Institutional Review Board (IRB) or other ethical review process in their original paper.
+The dataset is related to people because each book is associated with an author (see the "Personal and Sensitive Information" section for more information on this topic).
+
+Bandy and Vincent also note that while the original paper by Zhu and Kiros et al. did not use labels for supervised learning, each book is labeled with genres, which appear to be supplied by the authors themselves. It is likely that some cleaning was done on the BookCorpus dataset: the .txt files appear to have been partially cleaned of some preamble and postscript text, though Zhu and Kiros et al. do not mention the specific cleaning steps, and many files still contain preamble and postscript text, including many sentences about licensing and copyright. For example, the sentence "please do not participate in or encourage piracy of copyrighted materials in violation of the author's rights" occurs at least 40 times in the BookCorpus books_in_sentences files. Additionally, based on samples reviewed from the original BookCorpus, the text appears to have been tokenized to some degree (e.g. contractions are split into two words), though the exact procedure used is unclear. It is unknown whether any of the "raw" data was saved in addition to the cleaned data. While the original software used to clean the BookCorpus dataset is not available, replication attempts provide software for turning .epub files into .txt files and subsequently cleaning them.

 #### Who are the source language producers?

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+Per [Bandy and Vincent (2021)](https://arxiv.org/abs/2105.05241), the data in BookCorpus was produced by self-published authors on smashwords.com and aggregated using scraping software by Zhu and Kiros et al.

 ### Annotations

@@ -140,7 +143,8 @@ The data fields are the same among all splits.

 ### Personal and Sensitive Information

-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+Per [Bandy and Vincent (2021)](https://arxiv.org/abs/2105.05241), it is unlikely that authors were notified about data collection from their works. Discussing BookCorpus in 2016, Richard Lea wrote in [The Guardian](https://www.theguardian.com/books/2016/sep/28/google-swallows-11000-novels-to-improve-ais-conversation) that "The only problem is that [researchers] didn't ask." When notified about BookCorpus and its uses, one author from Smashwords said "it didn't even occur to me that a machine could read my book."
+Authors did not consent to the collection and use of their books. While authors on smashwords.com published their books for free, they did not consent to including their work in BookCorpus, and many books contain copyright restrictions intended to prevent redistribution. As described by Richard Lea in [The Guardian](https://www.theguardian.com/books/2016/sep/28/google-swallows-11000-novels-to-improve-ais-conversation), many books in BookCorpus include "a copyright declaration that reserves 'all rights', specifies that the ebook is 'licensed for your personal enjoyment only', and offers the reader thanks for 'respecting the hard work of this author'." Considering these copyright declarations, authors did not explicitly consent to including their work in BookCorpus or related datasets. Using the framework of [consentful tech](https://www.consentfultech.io), a consentful version of BookCorpus would ideally involve author consent that is Freely given, Reversible, Informed, Enthusiastic, and Specific (FRIES). It is unlikely that authors were provided with a mechanism to revoke their consent in the future or for certain uses; for example, if an author released a book for free before BookCorpus was collected, then changed the price and/or copyright afterwards, the book likely remained in BookCorpus. In fact, preliminary analysis suggests that this is the case for at least 438 books in BookCorpus, which are no longer free to download from Smashwords and would cost $1,182.21 to purchase as of April 2021.

 ## Considerations for Using the Data
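The collection steps described above (convert each downloaded epub chapter to plain text, then keep only books over 20,000 words) can be sketched roughly as follows. This is a minimal, standard-library-only illustration of the described process, not the original collection code; the names `EpubTextExtractor`, `chapter_to_plain_text`, and `passes_length_filter` are hypothetical.

```python
from html.parser import HTMLParser


class EpubTextExtractor(HTMLParser):
    """Collect the visible text from one XHTML chapter of an .epub file."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Keep only non-empty text nodes; tags themselves are discarded.
        text = data.strip()
        if text:
            self.chunks.append(text)


def chapter_to_plain_text(xhtml):
    """Convert one XHTML chapter (as stored inside an .epub) to plain text."""
    parser = EpubTextExtractor()
    parser.feed(xhtml)
    return " ".join(parser.chunks)


def passes_length_filter(plain_text, min_words=20_000):
    """Apply the 'Over 20K words' cutoff described above."""
    return len(plain_text.split()) >= min_words


sample = "<html><body><h1>Chapter 1</h1><p>It was a dark and stormy night.</p></body></html>"
print(chapter_to_plain_text(sample))
# Chapter 1 It was a dark and stormy night.
```

A real replication additionally scrapes smashwords.com for links to free ebooks and unzips each .epub (a zip archive of XHTML files) before this conversion step.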