
QA-Align

This dataset contains QA-Alignments --- fine-grained annotations of cross-text content overlap. The task input consists of two sentences from two documents that roughly describe the same event, along with their QA-SRL annotations, which capture verbal predicate-argument relations in question-answer form. The output is a cross-sentence alignment between sets of QAs that convey the same information.

See the paper for details: QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions, Brook Weiss et al., EMNLP 2021.

The script downloads the data from the original GitHub repository.

Format

The dataset contains the following important features:

  • abs_sent_id_1, abs_sent_id_2 - unique sentence ids, unique across all data sources.
  • text_1, text_2, prev_text_1, prev_text_2 - the two candidate sentences for alignment, together with their preceding ("prev") sentences, which provide context (shown to workers and given to the model).
  • qas_1, qas_2 - the sets of QA-SRL QAs for each sentence. In the dev and test splits they were written by workers, while in the train split they were generated by the QA-SRL parser.
  • alignments - the aligned QAs that workers matched. This is a list of qa-alignments, where a single alignment looks like this:
{'sent1': [{'qa_uuid': '33_1ecbplus~!~8~!~195~!~12~!~charged~!~4082',
    'verb': 'charged',
    'verb_idx': 12,
    'question': 'Who was charged?',
    'answer': 'the two youths',
    'answer_range': '9:11'}],
  'sent2': [{'qa_uuid': '33_8ecbplus~!~3~!~328~!~11~!~accused~!~4876',
    'verb': 'accused',
    'verb_idx': 11,
    'question': 'Who was accused of something?',
    'answer': 'two men',
    'answer_range': '9:10'}]}

For each sentence, the alignment stores the list of aligned QAs from that sentence.
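The `qa_uuid` values in the example above look like `~!~`-separated strings whose fourth and fifth components match the QA's `verb_idx` and `verb` fields. As an illustration, here is a minimal sketch of splitting one; the helper name is ours, and the meaning of the remaining components is an assumption, not documented by the dataset:

```python
# Hypothetical helper: split a qa_uuid into its "~!~"-separated components.
# Only the verb_idx/verb components are confirmed by the example above;
# the other field meanings are guesses.
def split_qa_uuid(qa_uuid: str) -> dict:
    parts = qa_uuid.split("~!~")
    return {
        "sent_id": parts[0],        # e.g. "33_1ecbplus" (assumed meaning)
        "verb_idx": int(parts[3]),  # matches the example's verb_idx field
        "verb": parts[4],           # matches the example's verb field
        "raw_parts": parts,
    }

uuid = "33_1ecbplus~!~8~!~195~!~12~!~charged~!~4082"
info = split_qa_uuid(uuid)
print(info["verb"], info["verb_idx"])  # charged 12
```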

Note that a single alignment may contain multiple QAs from each sentence. While 96% of the data are one-to-one alignments, 4% are many-to-many alignments (most often 2-to-1).
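To distinguish one-to-one from many-to-many alignments in practice, it is enough to count the QAs on each side of an alignment. A minimal sketch, using the alignment structure shown above (the helper name is ours):

```python
# Classify a single qa-alignment by how many QAs it links on each side,
# following the {'sent1': [...], 'sent2': [...]} structure shown above.
def alignment_arity(alignment: dict) -> str:
    n1, n2 = len(alignment["sent1"]), len(alignment["sent2"])
    return "one-to-one" if n1 == 1 and n2 == 1 else f"{n1}-to-{n2}"

# Trimmed-down version of the example alignment from this card.
example = {
    "sent1": [{"question": "Who was charged?", "answer": "the two youths"}],
    "sent2": [{"question": "Who was accused of something?", "answer": "two men"}],
}
print(alignment_arity(example))  # one-to-one
```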
