Semantic search over 44 million English Wikipedia paragraphs using a sentence-transformers encoder.

The dataset contains:

  • 43 911 155 paragraphs from 6 458 670 Wikipedia articles, stored in a zip archive;
  • a FAISS index with the paragraph embeddings;
  • a retriever module for semantic search over the paragraphs.
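At its core, the retriever performs nearest-neighbor search over unit-normalized embedding vectors. A minimal NumPy sketch of that operation is below; the corpus and query here are synthetic stand-ins (in the real dataset, the corpus is the 44 M paragraph embeddings, queries are encoded with all-mpnet-base-v2, and the FAISS index accelerates this same search), so the sizes and vectors are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 768  # embedding size stated in this card

# Stand-in for the paragraph embeddings; the real corpus has 43 911 155 rows.
corpus = rng.standard_normal((1000, dim)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

# Stand-in for an encoded query: a slightly perturbed copy of paragraph 42.
query = corpus[42] + 0.01 * rng.standard_normal(dim).astype(np.float32)
query /= np.linalg.norm(query)

# On unit vectors, the dot product is cosine similarity.
scores = corpus @ query
top5 = np.argsort(-scores)[:5]  # indices of the 5 most similar paragraphs
```

Brute-force scoring like this is exact but linear in corpus size; the shipped IVF/HNSW index trades a little recall for sub-linear search over the full 44 M vectors.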

The size of each paragraph varies from 20 to 2000 characters.
The embedding vector size is 768.
The index is a 4-bit-quantized, two-level IVF16384_HNSW32 index built with the FAISS library.
Sentence encoder: all-mpnet-base-v2.
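The 4-bit quantization matters at this scale: a quick back-of-the-envelope calculation from the numbers in this card (43 911 155 vectors of 768 float32 components) shows why the raw embeddings are impractical to hold in memory. The figures below ignore IVF/HNSW structural overhead, so they are a lower bound on index size:

```python
NUM_PARAGRAPHS = 43_911_155  # paragraph count from this card
DIM = 768                    # all-mpnet-base-v2 embedding size

raw_bytes = NUM_PARAGRAPHS * DIM * 4         # float32: 4 bytes per component
quantized_bytes = NUM_PARAGRAPHS * DIM // 2  # 4-bit codes: 2 components per byte

GiB = 2 ** 30
print(f"float32 embeddings: {raw_bytes / GiB:.1f} GiB")        # ~125.6 GiB
print(f"4-bit quantized:    {quantized_bytes / GiB:.1f} GiB")  # ~15.7 GiB
```

The 8x compression brings the vector storage from roughly 126 GiB down to about 16 GiB, at the cost of approximate rather than exact distances.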
