
TREC Cast 2019

TREC CAsT has released a document collection with topics and qrels, of which a subset has been annotated so that it is suitable for multi-turn conversational search.

Dataset statistics

  • Passages: 38,426,252

  • Topics: 20

  • Queries: 173

Subsets

CAR + MSMARCO Collection

Together, CAR and MS MARCO have a size of 6.13 GB, so downloading will take a while. You can load the collection as follows:

from datasets import load_dataset

collection = load_dataset('uva-irlab/trec-cast-2019-multi-turn', 'test_collection')

The collection has the following data format:

docno: str
  The document id has the format [collection_id]_[paragraph_id], i.e. the collection id and paragraph id separated by an underscore.
  The collection id is one of {MARCO, CAR}. E.g.: CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a
text: str
  The content of the passage.
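For instance, a docno can be split back into its collection id and paragraph id. The helper below is an illustrative sketch, not part of the dataset's API:

```python
def parse_docno(docno: str) -> tuple:
    """Split a docno into (collection_id, paragraph_id).

    The collection id (MARCO or CAR) never contains an underscore,
    so splitting on the first underscore is safe even though CAR
    paragraph ids are long hex hashes.
    """
    collection_id, paragraph_id = docno.split("_", 1)
    return collection_id, paragraph_id

print(parse_docno("CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a"))
# → ('CAR', '6869dee46ab12f0f7060874f7fc7b1c57d53144a')
```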

Sample

Instead of using the entire collection, you can also download a sample set containing only 200,000 passages:

collection = load_dataset('uva-irlab/trec-cast-2019-multi-turn', 'test_collection_sample')

Topics

You can get the topics as follows:

topics = load_dataset('uva-irlab/trec-cast-2019-multi-turn', 'topics')

The topics have the following data format:

qid: str
  Query ID of the format "topicId_questionNumber"
history: List[str]
  A list of queries. It can be empty for the first question in a topic.
query: str
  The query
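A simple way to use the history is to prepend it to the current query, a common baseline for multi-turn retrieval. The record below is a hypothetical example shaped like the fields above, not taken from the dataset:

```python
# Hypothetical topic record with the fields described above.
topic = {
    "qid": "31_3",
    "history": [
        "What is throat cancer?",
        "Is it treatable?",
    ],
    "query": "Tell me about lung cancer.",
}

def contextualize(record):
    """Concatenate the conversation history with the current query,
    a simple baseline for resolving context in multi-turn search."""
    return " ".join(record["history"] + [record["query"]])

print(contextualize(topic))
# → What is throat cancer? Is it treatable? Tell me about lung cancer.
```

For the first question in a topic, history is empty, so the contextualized query is just the query itself.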

Qrels

You can get the qrels as follows:

qrels = load_dataset('uva-irlab/trec-cast-2019-multi-turn', 'qrels')

The qrels have the following data format:

qid: str
  Query ID of the format "topicId_questionNumber"
qrels: List[dict]
  A list of dictionaries with the keys 'docno' and 'relevance'. Relevance is an integer in the range [0, 4]
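To evaluate with standard tools such as trec_eval, each record can be flattened into the usual "qid 0 docno relevance" qrels lines. The record and docnos below are hypothetical, shaped like the fields above:

```python
# Hypothetical qrels record with the fields described above.
record = {
    "qid": "31_3",
    "qrels": [
        {"docno": "MARCO_955948", "relevance": 2},
        {"docno": "CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a", "relevance": 0},
    ],
}

def to_trec_lines(record):
    """Render one record as standard TREC qrels lines:
    '<qid> 0 <docno> <relevance>' (the second column is unused)."""
    return [
        f"{record['qid']} 0 {j['docno']} {j['relevance']}"
        for j in record["qrels"]
    ]

for line in to_trec_lines(record):
    print(line)
```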