Dataset Card for Early Printed Books Font Detection Dataset
Dataset Summary
This dataset is composed of photographs, of various resolutions, of 35,623 pages of printed books dating from the 15th to the 18th century. Experts assigned each page between one and five labels corresponding to the font groups used in its text: Antiqua, Bastarda, Fraktur, Gotico Antiqua, Greek, Hebrew, Italic, Rotunda, Schwabacher, and Textura, plus two extra classes for non-textual content and for fonts not in this list.
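A minimal loading sketch. The Hub id biglam/early_printed_books_font_detection is assumed from this repository, and the feature layout (a Sequence of ClassLabel) is an assumption, not confirmed by the card:

```python
from datasets import load_dataset

# Hub id assumed from this repository; adjust if it differs.
ds = load_dataset("biglam/early_printed_books_font_detection", split="train")

# Assuming `labels` is a Sequence(ClassLabel), list the font-group names.
print(ds.features["labels"].feature.names)
```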
Supported Tasks and Leaderboards
The primary use case for this dataset is:
- multi-label-image-classification: the dataset can be used to train a model for multi-label image classification, where each image can have one or more labels.
- image-classification: the dataset could also be adapted to predict a single label for each image.
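For the multi-label task, the per-page lists of integer labels can be converted into multi-hot target vectors. A sketch, again assuming `labels` is a Sequence(ClassLabel):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("biglam/early_printed_books_font_detection", split="train")
num_classes = ds.features["labels"].feature.num_classes

def to_multi_hot(example):
    # One position per font-group class; 1.0 where the class is present.
    target = np.zeros(num_classes, dtype=np.float32)
    target[example["labels"]] = 1.0
    return {"target": target}

ds = ds.map(to_multi_hot)
```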
Languages
The dataset includes books from a range of libraries (see below for further details). The paper does not provide a detailed breakdown of the languages involved; however, since the books date from the 15th to the 18th century, European languages of that period appear to dominate. The dataset also includes pages in Hebrew (and Greek) script.
Dataset Structure
This dataset has a single configuration.
Data Instances
An example instance from this dataset:
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3072x3840 at 0x7F6AC192D850>,
'labels': [5]}
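To map the integer labels back to font-group names, ClassLabel.int2str can be used. A sketch continuing from the loading example above, under the same feature-layout assumption:

```python
example = ds[0]
label_feature = ds.features["labels"].feature

# e.g. [5] -> the corresponding font-group name(s)
print([label_feature.int2str(i) for i in example["labels"]])
```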
Data Fields
This dataset contains two fields:
- image: the image of the book page
- labels: one or more labels for the font group(s) used in the book page depicted in image
Data Splits
The dataset is split into train and test sets with the following number of examples:
- train: 24,866
- test: 10,757
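These counts can be verified after loading both splits (a sketch under the same Hub-id assumption as above):

```python
from datasets import load_dataset

dsd = load_dataset("biglam/early_printed_books_font_detection")
print({name: split.num_rows for name, split in dsd.items()})
# Expected: {'train': 24866, 'test': 10757}
```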
Dataset Creation
Curation Rationale
The dataset was created to help train and evaluate automatic methods for font detection. The paper describing the dataset also states that:
data was cherry-picked, thus it is not statistically representative of what can be found in libraries. For example, as we had a small amount of Textura at the start, we specifically looked for more pages containing this font group, so we can expect that less than 3.6 % of randomly selected pages from libraries would contain Textura.
Source Data
Initial Data Collection and Normalization
The images in this dataset are from books held by the British Library (London), Bayerische Staatsbibliothek München, Staatsbibliothek zu Berlin, Universitätsbibliothek Erlangen, Universitätsbibliothek Heidelberg, Staats- und Universitätsbibliothek Göttingen, Stadt- und Universitätsbibliothek Köln, Württembergische Landesbibliothek Stuttgart and Herzog August Bibliothek Wolfenbüttel.
Who are the source language producers?
[More Information Needed]
Annotations
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
[More Information Needed]
Contributions
Thanks to @github-username for adding this dataset.