---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Team-PIXEL/rendered-bookcorpus
size_categories:
- 1M<n<10M
source_datasets:
- rendered|BookCorpusOpen
task_categories:
- masked-auto-encoding
- rendered-language-modelling
task_ids:
- masked-auto-encoding
- rendered-language-modeling
paperswithcode_id: bookcorpus
---

# Dataset Card for Team-PIXEL/rendered-bookcorpus

## Dataset Description

- **Homepage:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Repository:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Papers:** [Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books](https://arxiv.org/abs/1506.06724), [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)
- **Point of Contact:** [Phillip Rust](mailto:p.rust@di.ku.dk)
- **Size of downloaded dataset files:** 63.58 GB
- **Size of the generated dataset:** 63.59 GB
- **Total amount of disk used:** 127.17 GB

### Dataset Summary

This dataset is a version of the BookCorpus available at [https://huggingface.co/datasets/bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen), with examples rendered as images of resolution 16x8464 pixels.

The original BookCorpus was introduced by Zhu et al. (2015) in [Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books](https://arxiv.org/abs/1506.06724) and contains 17868 books of various genres. The rendered BookCorpus was used to train the [PIXEL](https://huggingface.co/Team-PIXEL/pixel-base) model introduced in the paper [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) by Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott.

The BookCorpusOpen dataset was rendered book-by-book into 5.4M examples containing approximately 1.1B words in total. The dataset is stored as a collection of 162 parquet files. It was rendered using the script openly available at [https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py](https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py). The text renderer uses a PyGame backend and a collection of merged Google Noto Sans fonts. The PyGame backend does not support complex text layouts (e.g. ligatures and right-to-left scripts) or emoji, so occurrences of such text in the BookCorpus have not been rendered accurately.

Each example consists of a "pixel_values" field, which stores a 16x8464 (height, width) grayscale image containing the rendered text, and an integer field "num_patches", which stores how many image patches (when the image is split into 529 non-overlapping patches of resolution 16x16 pixels) in the associated image contain actual text, i.e. are neither blank (fully white) nor the fully black end-of-sequence patch.

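For a quick look at the data, the following is a minimal sketch of loading a single rendered example with the `datasets` library, using streaming so that the full ~64 GB of parquet files does not have to be downloaded up front:

```python
from datasets import load_dataset

# Stream the dataset instead of downloading all ~64 GB of parquet files at once.
dataset = load_dataset("Team-PIXEL/rendered-bookcorpus", split="train", streaming=True)

# Take one rendered example and inspect its two fields.
example = next(iter(dataset))
image = example["pixel_values"]       # PIL image, mode "L", size 8464x16 (width x height)
num_patches = example["num_patches"]  # number of 16x16 patches that contain rendered text

print(image.size, image.mode, num_patches)
```

Dropping `streaming=True` downloads and caches the full dataset locally instead.
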
## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 63.58 GB
- **Size of the generated dataset:** 63.59 GB
- **Total amount of disk used:** 127.17 GB

An example of 'train' looks as follows.
```
{
  "pixel_values": <PIL.PngImagePlugin.PngImageFile image mode=L size=8464x16>,
  "num_patches": 498
}
```

### Data Fields

The data fields are the same among all splits.

- `pixel_values`: an `Image` feature.
- `num_patches`: a `Value(dtype="int64")` feature.

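The patch layout described in the summary can be recovered directly from the image. Below is a rough NumPy sketch, assuming `example` is a record loaded as in the earlier snippet and that the rendered text occupies the leading patches (followed by the black end-of-sequence patch and blank padding):

```python
import numpy as np

# Split a rendered example into its 529 non-overlapping 16x16 patches.
pixels = np.array(example["pixel_values"])                # shape (16, 8464), grayscale
patches = pixels.reshape(16, 529, 16).transpose(1, 0, 2)  # shape (529, 16, 16)

# Assumption: text patches come first, then the black end-of-sequence patch,
# then blank (white) padding, so the first `num_patches` patches hold the text.
text_patches = patches[: example["num_patches"]]
print(patches.shape, text_patches.shape)
```

The reshape simply slices the 8464-pixel-wide strip into 529 columns of width 16, matching the 16x16 patch resolution described above.
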
### Data Splits

|train|
|:----|
|5400000|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The books have been crawled from smashwords.com; see their [terms of service](https://www.smashwords.com/about/tos) for more information.

A datasheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241).

### Citation Information

```bibtex
@InProceedings{Zhu_2015_ICCV,
  title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
  author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {December},
  year = {2015}
}
```

```bibtex
@article{rust-etal-2022-pixel,
  title={Language Modelling with Pixels},
  author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
  journal={arXiv preprint},
  year={2022},
  url={https://arxiv.org/abs/2207.06991}
}
```

### Contact Person

This dataset was added by Phillip Rust.

GitHub: [@xplip](https://github.com/xplip)

Twitter: [@rust_phillip](https://twitter.com/rust_phillip)