# Dataset Card for severo/embellishments
Test: link to a space:

- https://huggingface.co/spaces/severo/voronoi-cloth
- https://severo-voronoi-cloth.hf.space
## Dataset Summary

This small dataset contains the thumbnails of the first 100 entries of the British Library collection *Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900. JPG*. It was uploaded to the Hub to reproduce the tutorial by Daniel van Strien, *Using 🤗 datasets for image search*.
## Dataset Structure

### Data Instances
A typical row contains an image thumbnail, its filename, and the year of publication of the book it was extracted from.
An example looks as follows:
```python
{
  'fname': '000811462_05_000205_1_The Pictorial History of England being a history of the people as well as a hi_1855.jpg',
  'year': '1855',
  'path': 'embellishments/1855/000811462_05_000205_1_The Pictorial History of England being a history of the people as well as a hi_1855.jpg',
  'img': ...
}
```
### Data Fields

- `fname`: the image filename.
- `year`: a string with the year of publication of the book from which the image was extracted.
- `path`: the local path to the image.
- `img`: a thumbnail of the image, with a maximum height and width of 224 pixels.
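For illustration, here is a minimal sketch of how an image can be bounded to 224 pixels with Pillow; the file paths are placeholders, and this is not necessarily the exact preprocessing used for this dataset:

```python
# Hypothetical sketch: bound an image to 224x224 with Pillow,
# preserving the aspect ratio (the largest side becomes 224 px).
from PIL import Image

with Image.open("page_image.jpg") as im:  # placeholder input path
    im.thumbnail((224, 224))  # shrinks in place; never upscales
    im.save("thumbnail.jpg")  # placeholder output path
```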
### Data Splits

The dataset contains only 100 rows, in a single `train` split.
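For example, the split can be loaded with 🤗 Datasets as follows (the values shown in comments are illustrative):

```python
from datasets import load_dataset

# Load the single 'train' split (100 rows)
ds = load_dataset("severo/embellishments", split="train")
print(len(ds))                        # 100
print(ds[0]["fname"], ds[0]["year"])  # filename and year of the first row
```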
## Dataset Creation

### Curation Rationale
This dataset was chosen by Daniel van Strien for his tutorial *Using 🤗 datasets for image search*, which includes the Python code to reproduce it.
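As a rough sketch of that kind of text-to-image search workflow (this is not the tutorial's exact code; it assumes `sentence-transformers` and `faiss` are installed, and the model name and query string are illustrative):

```python
# Embed the thumbnails with a CLIP model and search them by text query
# through a FAISS index.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # CLIP embeds images and text into one space
ds = load_dataset("severo/embellishments", split="train")

# Embed each thumbnail (the 'img' column decodes to a PIL image)
ds = ds.map(lambda row: {"embedding": model.encode(row["img"])})
ds.add_faiss_index(column="embedding")

# Retrieve the images closest to a text query
query = model.encode("a ship at sea")
scores, results = ds.get_nearest_examples("embedding", query, k=5)
print(results["fname"])
```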
### Source Data

#### Initial Data Collection and Normalization
As stated on the British Library webpage:

> The images were algorithmically gathered from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 - c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature. The images are in .JPEG format.

The associated BCP-47 code is `en`.
#### Who are the source data producers?

British Library, British Library Labs, Adrian Edwards (Curator), Neil Fitzgerald (Contributor).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process

[N/A]

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

[N/A]
## Considerations for Using the Data

### Social Impact of Dataset

[N/A]

### Discussion of Biases

[N/A]

### Other Known Limitations
This is a toy dataset that aims at:

- validating the process described in the tutorial *Using 🤗 datasets for image search* by Daniel van Strien,
- showcasing the dataset viewer on an image dataset.
## Additional Information

### Dataset Curators

The dataset was created by Sylvain Lesage at Hugging Face to replicate the tutorial *Using 🤗 datasets for image search* by Daniel van Strien.
### Licensing Information

CC0 1.0 Universal (Public Domain Dedication)