---
annotations_creators:
- machine-generated
language:
- de
- nl
- en
- fr
- es
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: Berlin State Library OCR
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- ocr
- library
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---

# Dataset Card for Berlin State Library OCR data

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

> The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945.

> At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages.
> For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012).

### Supported Tasks and Leaderboards

This dataset is primarily useful for training language models, masked or causal, on historical, OCR-derived text (the declared task categories are `fill-mask` and `text-generation`).
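
A minimal sketch of pulling the OCR text out as language-model training material, assuming access through the Hugging Face `datasets` library (the repository id below is a placeholder, not this dataset's actual id):

```python
from datasets import load_dataset

# "someuser/berlin-state-library-ocr" is a placeholder id, not the real repository name.
dataset = load_dataset("someuser/berlin-state-library-ocr", split="train")

# The OCR text column alone is enough to build a (masked) language-modeling corpus.
texts = dataset["text"]
print(len(texts), texts[0][:200])
```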

### Languages

The collection includes material across a large number of languages. The languages of the OCR text have been detected using [langid.py: An Off-the-shelf Language Identification Tool](https://aclanthology.org/P12-3005) (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. **Note:** not all examples may have been successfully matched to the language prediction table from the original data. 

The frequency of the top ten languages in the dataset is shown below: 

| language | frequency  |
|----------|------------|
| de       | ≈3,209,630 |
| nl       | 491,322    |
| en       | 473,496    |
| fr       | 216,210    |
| es       | 68,869     |
| lb       | 33,625     |
| la       | 27,397     |
| pl       | 17,458     |
| it       | 16,012     |
| zh       | 11,971     |
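
A hedged sketch of restricting the dataset to a single predicted language, assuming `dataset` was loaded as in the sketch above (field names follow the Data Fields section below; the 0.95 confidence threshold is purely illustrative):

```python
# Keep only pages predicted as German with a high langid confidence.
german_pages = dataset.filter(
    lambda example: example["language"] == "de"
    and example["language_confidence"] > 0.95
)
print(f"{len(german_pages)} German pages retained")
```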

[More Information Needed]

## Dataset Structure

### Data Instances

Each example represents a single page of OCR'd text. 

A single example of the dataset is as follows:

```python
{'file name': '00000045.xml',
 'language': 'fr',
 'language_confidence': 0.9999999999910871,
 'ppn': '646426230',
 'text': 'Fig. 156 Tirant les sorts au moyen de la divination de Wen-wang',
 'wc': [0.6125000119,
  0.4799999893,
  0.7916666865,
  0.8066666722,
  0.7720000148,
  0.5849999785,
  0.7580000162,
  0.9200000167,
  0.6449999809,
  0.6060000062,
  0.6549999714,
  0.6362500191]}
```
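
Since `wc` holds one confidence value per token, a simple page-level quality score is their mean. A hedged sketch, assuming `dataset` was loaded as above (the helper name, the added `mean_wc` column, and the threshold are illustrative, not part of the dataset):

```python
# Average the per-token word confidences in `wc` to get a rough page-level
# OCR quality score, then drop pages below an illustrative threshold.
def add_mean_wc(example):
    wc = example["wc"]
    example["mean_wc"] = sum(wc) / len(wc) if wc else 0.0
    return example

scored = dataset.map(add_mean_wc)
high_quality = scored.filter(lambda example: example["mean_wc"] >= 0.7)
```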

### Data Fields

- 'file name': filename of the original XML file
- 'text': OCR'd text for that page of the item
- 'wc': the word confidence for each token predicted by the OCR engine
- 'ppn': 'Pica production number', an internal ID used by the library; see [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2702544.svg)](https://doi.org/10.5281/zenodo.2702544) for more details
- 'language': language predicted by `langid.py` (see above for more details)
- 'language_confidence': confidence score given by `langid.py`

[More Information Needed]

### Data Splits

This dataset contains only a single split `train`. 
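
Given the 1M-10M size category, it may be convenient to stream the split rather than download it in full. A hedged sketch, again using a placeholder repository id:

```python
from datasets import load_dataset

# Streaming avoids materialising millions of pages on disk before iterating.
streamed = load_dataset("someuser/berlin-state-library-ocr", split="train", streaming=True)
for example in streamed.take(3):
    print(example["file name"], example["language"])
```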

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

This dataset includes the text content produced by running Optical Character Recognition over works digitized by the Berlin State Library; at the time of publication, 28,909 of the library's 153,942 digitized works had been OCR-processed, yielding 4,988,099 full-text pages.

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

This dataset contains machine-produced annotations for:

- the word-level confidence scores reported by the OCR engine used to produce the full-text materials
- the predicted languages and associated confidence scores produced by `langid.py`

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

This dataset contains historical material which may include names, addresses, etc., but these are unlikely to refer to living individuals.

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

As with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. Consider carefully how these potential biases may be reflected in language models trained on this data.

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Labusch, Kai; Zellhöfer, David

### Licensing Information

[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)

### Citation Information

```
@dataset{labusch_kai_2019_3257041,
  author       = {Labusch, Kai and
                  Zellhöfer, David},
  title        = {{OCR fulltexts of the Digital Collections of the 
                   Berlin State Library (DC-SBB)}},
  month        = jun,
  year         = 2019,
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.3257041},
  url          = {https://doi.org/10.5281/zenodo.3257041}
}
```

### Contributions

Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.