# Dataset Card for InsightFace Embeddings
## Dataset Summary
This dataset contains face embeddings generated by the InsightFace model from the FaceData dataset. The embeddings are stored in `.h5` (HDF5) files, and the full dataset is 139 GB. It is intended for face recognition and verification tasks.
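As a sketch of how such embeddings are typically used for verification, a common approach is to compare two embeddings with cosine similarity. The vectors below are random stand-ins, not real InsightFace outputs, and the 512-dimensional size is an assumption based on common InsightFace defaults:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb_a = rng.normal(size=512)                      # stand-in "probe" embedding
emb_b = emb_a + rng.normal(scale=0.1, size=512)   # slightly perturbed "same face"
emb_c = rng.normal(size=512)                      # unrelated "different face"

same = cosine_similarity(emb_a, emb_b)
diff = cosine_similarity(emb_a, emb_c)
print(f"same-identity similarity:  {same:.3f}")
print(f"cross-identity similarity: {diff:.3f}")
```

In practice a threshold on this similarity decides whether two faces match; the threshold must be tuned on held-out data.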
## Dataset Structure

### Data Instances
Each data instance consists of an embedding vector generated by the InsightFace model, along with the corresponding face image. Both are stored in HDF5 (`.h5`) files, where each file holds a batch of processed images from the FaceData dataset.
### Data Fields

Each `.h5` file contains:
- `/images/{image_name}`: the cropped and resized face image.
- `/embeddings/{image_name}`: the embedding vector for that face.
- `filename_{image_name}`: the original filename of the image, stored as a file-level attribute (on the root of the file, not as a dataset).
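A minimal sketch of this layout, written and read back with `h5py` using dummy data. The 112×112 crop size and 512-dimensional embedding are assumptions based on common InsightFace defaults, not confirmed by this card:

```python
import h5py
import numpy as np

# Write a tiny file that mirrors the documented layout.
with h5py.File('example.h5', 'w') as h5f:
    key = 'face_000'
    h5f.create_dataset(f'images/{key}', data=np.zeros((112, 112, 3), dtype=np.uint8))
    h5f.create_dataset(f'embeddings/{key}', data=np.zeros(512, dtype=np.float32))
    # Original filename lives in a file-level attribute, not a dataset.
    h5f.attrs[f'filename_{key}'] = 'original_photo.jpg'

# Read it back and inspect the structure.
with h5py.File('example.h5', 'r') as h5f:
    print(list(h5f['images'].keys()))        # ['face_000']
    print(h5f['embeddings/face_000'].shape)  # (512,)
    print(h5f.attrs['filename_face_000'])    # original_photo.jpg
```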
## Dataset Creation

### Source Data
- Original dataset: the embeddings are generated from the FaceData dataset, which contains face images used for training and evaluation.
- Model: the InsightFace model was used to generate the embeddings.
### Preprocessing
The preprocessing steps include:
- Loading the images from the FaceData dataset.
- Detecting faces using the InsightFace model.
- Cropping and resizing the detected faces to a standard size.
- Generating embeddings for the detected faces.
- Storing the images and embeddings in HDF5 files.
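The steps above can be sketched end to end. Here the detection/embedding step is a hypothetical stand-in (`fake_detect_and_embed`), since this card does not specify how the real InsightFace model was invoked; only the HDF5 storage step mirrors the documented layout:

```python
import h5py
import numpy as np

def fake_detect_and_embed(image: np.ndarray):
    """Hypothetical stand-in for face detection + embedding.

    A real pipeline would detect the face, align and crop it, and run the
    InsightFace recognition model; here we just center-crop the image and
    fabricate a 512-D vector so the storage step can be demonstrated.
    """
    h, w = image.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = image[top:top + s, left:left + s]
    embedding = np.random.default_rng(0).normal(size=512).astype(np.float32)
    return crop, embedding

# Steps 1-5 for a single synthetic image standing in for a FaceData image.
image = np.zeros((160, 200, 3), dtype=np.uint8)
crop, embedding = fake_detect_and_embed(image)

with h5py.File('pipeline_example.h5', 'w') as h5f:
    key = 'face_000'
    h5f.create_dataset(f'images/{key}', data=crop)
    h5f.create_dataset(f'embeddings/{key}', data=embedding)
    h5f.attrs[f'filename_{key}'] = 'source_image.jpg'
```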
## Example

Here is an example of how to load and use the data:
```python
import h5py
import numpy as np

def load_hdf5_data(hdf5_file):
    """Load all images, embeddings, and original filenames from one .h5 file."""
    with h5py.File(hdf5_file, 'r') as h5f:
        images = {}
        embeddings = {}
        filenames = []
        for key in h5f['images'].keys():
            # The original filename is stored as a file-level attribute.
            image_name = h5f.attrs[f'filename_{key}']
            img = np.array(h5f[f'images/{key}'])
            embedding = np.array(h5f[f'embeddings/{key}'])
            images[image_name] = img
            embeddings[image_name] = embedding
            filenames.append(image_name)
    return images, embeddings, filenames

# Example usage
hdf5_file = 'face_embeddings26.h5'
images, embeddings, filenames = load_hdf5_data(hdf5_file)

print("Loaded data from HDF5 file:")
print(f"Filename: {filenames[0]}")
print(f"Image shape: {images[filenames[0]].shape}")
print(f"Embedding shape: {embeddings[filenames[0]].shape}")
```