Dataset Card for "doqa"

Dataset Summary

DoQA is a dataset for accessing domain-specific FAQs via conversational QA. It contains 2,437 information-seeking question/answer dialogues (10,917 questions in total) on three different domains: cooking, travel and movies. Note that the generic concept of FAQs is taken to also include Community Question Answering sites, as well as corporate information in intranets that is maintained in textual form similar to FAQs, often referred to as internal “knowledge bases”.

These dialogues are created by crowd workers who play two roles: the user, who asks questions about a given topic posted on Stack Exchange (https://stackexchange.com/), and the domain expert, who replies by selecting a short span of text from the long textual reply in the original post. The expert can rephrase the selected span to make it sound more natural.

DoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain-specific FAQs.
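
As a quick start, the dataset can be loaded with the Hugging Face datasets library. A minimal sketch, assuming the Hub identifier "doqa" and that the original download URL is reachable:

from datasets import load_dataset

# Each domain is a separate configuration: "cooking", "movies" or "travel".
cooking = load_dataset("doqa", "cooking")

# Inspect one dialogue turn; field names follow the schema described below.
example = cooking["train"][0]
print(example["question"])
print(example["answers"]["text"][0])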

Supported Tasks and Leaderboards

More Information Needed

Languages

The text in the dataset is in English.

Dataset Structure

Data Instances

cooking

  • Size of downloaded dataset files: 4.00 MB
  • Size of the generated dataset: 10.79 MB
  • Total amount of disk used: 14.79 MB

An example of 'train' looks as follows.

This example was too long and was cropped:

{
    "answers": {
        "answer_start": [852],
        "text": ["CANNOTANSWER"]
    },
    "background": "\"So, over mixing batter forms gluten, which in turn hardens the cake. Fine.The problem is that I don't want lumps in the cakes, ...",
    "context": "\"Milk won't help you - it's mostly water, and gluten develops from flour (more accurately, specific proteins in flour) and water...",
    "followup": "n",
    "id": "C_64ce44d5f14347f488eb04b50387f022_q#2",
    "orig_answer": {
        "answer_start": [852],
        "text": ["CANNOTANSWER"]
    },
    "question": "Ok. What can I add to make it more softer and avoid hardening?",
    "title": "What to add to the batter of the cake to avoid hardening when the gluten formation can't be avoided?",
    "yesno": "x"
}

movies

  • Size of downloaded dataset files: 4.00 MB
  • Size of the generated dataset: 3.02 MB
  • Total amount of disk used: 7.02 MB

An example of 'test' has the same structure as the cooking example shown above.

travel

  • Size of downloaded dataset files: 4.00 MB
  • Size of the generated dataset: 3.07 MB
  • Total amount of disk used: 7.07 MB

An example of 'test' has the same structure as the cooking example shown above.

Data Fields

The data fields are identical across all splits and all three configurations (cooking, movies, travel):

  • title: a string feature.
  • background: a string feature.
  • context: a string feature.
  • question: a string feature.
  • id: a string feature.
  • answers: a dictionary feature containing:
    • text: a string feature.
    • answer_start: an int32 feature.
  • followup: a string feature.
  • yesno: a string feature.
  • orig_answer: a dictionary feature containing:
    • text: a string feature.
    • answer_start: an int32 feature.
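
To make the answers schema concrete, here is a small hypothetical helper that recovers the answer span from answer_start and treats the CANNOTANSWER marker (seen in the example instance above) as an unanswerable question. It assumes, as in SQuAD/QuAC-style extractive QA, that answer_start is a character offset into context:

def extract_answer(example):
    """Return the answer text, or None when the question is unanswerable."""
    text = example["answers"]["text"][0]
    if text == "CANNOTANSWER":  # marker used for unanswerable questions
        return None
    start = example["answers"]["answer_start"][0]
    span = example["context"][start:start + len(text)]
    # Experts may rephrase the selected span (see the summary above), so the
    # annotated text is preferred whenever it differs from the raw context span.
    return text if span != text else span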

Data Splits

cooking

          train   validation   test
cooking    4612          911   1797

movies

          test
movies    1884

travel

          test
travel    1713
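
A short sketch for checking these counts against what the loader actually produces (configuration and split names as listed above):

from datasets import load_dataset

# "cooking" has train/validation/test; "movies" and "travel" are test-only.
for config in ("cooking", "movies", "travel"):
    ds = load_dataset("doqa", config)
    print(config, {split: ds[split].num_rows for split in ds})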

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information


@misc{campos2020doqa,
    title={DoQA -- Accessing Domain-Specific FAQs via Conversational QA},
    author={Jon Ander Campos and Arantxa Otegi and Aitor Soroa and Jan Deriu and Mark Cieliebak and Eneko Agirre},
    year={2020},
    eprint={2005.01328},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Contributions

Thanks to @mariamabarham, @thomwolf, @lhoestq for adding this dataset.