
# AToMiC Prebuilt Indexes

## Example Usage

### Reproduction

Toolkits: https://github.com/TREC-AToMiC/AToMiC/tree/main/examples/dense_retriever_baselines

```bash
# Skip the encode and index steps; search with the prebuilt indexes and topics directly
python search.py \
    --topics topics/openai.clip-vit-base-patch32.text.validation \
    --index indexes/openai.clip-vit-base-patch32.image.faiss.flat \
    --hits 1000 \
    --output runs/run.openai.clip-vit-base-patch32.validation.t2i.large.trec

python search.py \
    --topics topics/openai.clip-vit-base-patch32.image.validation \
    --index indexes/openai.clip-vit-base-patch32.text.faiss.flat \
    --hits 1000 \
    --output runs/run.openai.clip-vit-base-patch32.validation.i2t.large.trec
```
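The `--output` files presumably follow the standard six-column TREC run format (`qid Q0 docid rank score tag`, one line per retrieved item); this is an assumption based on the `.trec` extension, as the card does not describe the file layout. A minimal parser sketch under that assumption:

```python
# Parse a TREC-format run line: "qid Q0 docid rank score tag".
# The field layout is the conventional TREC run format; the files written
# by search.py are assumed (not confirmed) to follow it, and the example
# values below are illustrative only.
from typing import NamedTuple

class RunEntry(NamedTuple):
    qid: str
    docid: str
    rank: int
    score: float
    tag: str

def parse_run_line(line: str) -> RunEntry:
    qid, _q0, docid, rank, score, tag = line.split()
    return RunEntry(qid, docid, int(rank), float(score), tag)

entry = parse_run_line("q1 Q0 image_12345 1 0.8731 clip-t2i")
print(entry.docid, entry.rank, entry.score)
```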

### Explore AToMiC datasets

```python
import torch
from pathlib import Path
from datasets import load_dataset
from transformers import AutoModel, AutoProcessor

INDEX_DIR = 'indexes'
INDEX_NAME = 'openai.clip-vit-base-patch32.image.faiss.flat'
QUERY = 'Elizabeth II'

# Attach the prebuilt FAISS index to the image collection
images = load_dataset('TREC-AToMiC/AToMiC-Images-v0.2', split='train')
images.load_faiss_index(index_name=INDEX_NAME, file=Path(INDEX_DIR, INDEX_NAME, 'index'))

model = AutoModel.from_pretrained('openai/clip-vit-base-patch32')
processor = AutoProcessor.from_pretrained('openai/clip-vit-base-patch32')

# The prebuilt indexes contain L2-normalized vectors, so normalize the query too
with torch.no_grad():
    q_embedding = model.get_text_features(**processor(text=QUERY, return_tensors="pt"))
    q_embedding = torch.nn.functional.normalize(q_embedding, dim=-1).detach().numpy()

scores, retrieved = images.get_nearest_examples(INDEX_NAME, q_embedding, k=10)
```
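Because the prebuilt flat indexes store L2-normalized vectors, the inner-product search that a flat FAISS index performs is equivalent to cosine similarity, which is why the query embedding is normalized before searching. A small NumPy sketch of that equivalence, using synthetic vectors rather than actual CLIP embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
doc = rng.normal(size=512)    # stand-in for an indexed image embedding
query = rng.normal(size=512)  # stand-in for a text query embedding

# L2-normalize both vectors, as the prebuilt indexes (and the snippet above) do
doc_n = doc / np.linalg.norm(doc)
query_n = query / np.linalg.norm(query)

inner_product = float(query_n @ doc_n)
cosine = float(query @ doc / (np.linalg.norm(query) * np.linalg.norm(doc)))

# After normalization, the inner product equals the cosine similarity
assert np.isclose(inner_product, cosine)
```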