Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    DatasetGenerationError
Message:      An error occurred while generating the dataset
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1016, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2027, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1016, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
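
The root cause is a pyarrow limitation visible at the bottom of both inner tracebacks: a struct column with no child fields (here, the empty "_format_kwargs" dict shown in the preview below) cannot be written to Parquet. A minimal sketch that reproduces the error and one possible workaround; column values and file names are illustrative:

import pyarrow as pa
import pyarrow.parquet as pq

# Mirror the failing schema: a struct column with no child fields, next to an
# ordinary string column (values are illustrative).
table = pa.table({
    "_fingerprint": pa.array(["12b9c39b04f1ee82"]),
    "_format_kwargs": pa.array([{}], type=pa.struct([])),
})

try:
    pq.write_table(table, "state.parquet")
except pa.ArrowNotImplementedError as err:
    # "Cannot write struct type '_format_kwargs' with no child field to Parquet. ..."
    print(err)

# One workaround: drop (or populate) the empty struct column before writing.
pq.write_table(table.drop(["_format_kwargs"]), "state.parquet")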


Column (type)               Value
_data_files (list)          [ { "filename": "data-00000-of-00001.arrow" } ]
_fingerprint (string)       12b9c39b04f1ee82
_format_columns (sequence)  [ "answers.answer_start", "answers.text", "context", "feat_id", "feat_title", "question" ]
_format_kwargs (dict)       {}
_format_type (null)         null
_output_all_columns (bool)  false
_split (null)               null
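
The column names in this preview match the keys of the state.json file that datasets.Dataset.save_to_disk writes alongside its data-*.arrow shards, which suggests the repository holds a saved-to-disk dataset rather than plain data files. Such a directory is normally reloaded as sketched below; the local path is hypothetical:

from datasets import load_from_disk

# Hypothetical path to a directory containing state.json, dataset_info.json and
# data-00000-of-00001.arrow, as produced by Dataset.save_to_disk.
ds = load_from_disk("autotrain-data/demo-train-project/train")
print(ds)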

AutoTrain Dataset for project: demo-train-project

Dataset Description

This dataset has been automatically processed by AutoTrain for project demo-train-project.

Languages

The BCP-47 code for the dataset's language is en.

Dataset Structure

Data Instances

A sample from this dataset looks as follows:

[
  {
    "context": "Users have the right to, if necessary, rectification of inaccurate personal data concerning that User, via a written request, using the contact details in paragraph 9 below. The User has the right to demand deletion or restriction of processing, and the right to object to processing based on legitimate interest under certain circumstances. The User has the right to revoke any consent to processing that has been given by the User to Controller. Using this right may however, mean that the User can not apply for a specific job or otherwise use the Service. The User has under certain circumstances a right to data portability, which means a right to get the personal data and transfer these to another controller as long as this does not negatively affect the rights and freedoms of others. User has the right to lodge a complaint to the supervisory authority regarding the processing of personal data relating to him or her, if the User considers that the processing of personal data infringes the legal framework of privacy law. 4.",
    "question": "Can I edit or change the data that I have provided to you? ",
    "answers.text": [
      "Users have the right to, if necessary, rectification of inaccurate personal data concerning that User, via a written request, using the contact details"
    ],
    "answers.answer_start": [
      0
    ],
    "feat_id": [
      "310276"
    ],
    "feat_title": [
      ""
    ]
  },
  {
    "context": "The lawful basis is our legitimate interest in being able to administer our business and thereby provide Our Services (Article 6(1)(f) GDPR). Insurance companies. The purpose for these transfers is to handle insurance claims and administer Our insurance policies. The lawful basis is our legitimate interest in handling insurance claims and administrating Our insurance policies on an ongoing basis (Article 6(1)(f) GDPR). Courts and Counter Parties in legal matters. The purpose for these transfers is to defend, exercise and establish legal claims. The lawful basis is Our legitimate interest to defend, exercise and establish legal claims (Article 6(1)(f) GDPR). Regulators: to comply with all applicable laws, regulations and rules, and requests of law enforcement, regulatory and other governmental agencies;\nSolicitors and other professional services firms (including our auditors). Law enforcement agencies, including the Police. The purpose for these transfers is to assist law enforcement agencies and the Police in its investigations, to the extent we are obligated to do so.",
    "question": "What is the lawful basis of the processing of my data? ",
    "answers.text": [
      "The lawful basis is our legitimate interest in being able to administer our business and thereby provide Our Services (Article 6(1)(f) GDPR)."
    ],
    "answers.answer_start": [
      0
    ],
    "feat_id": [
      "310267"
    ],
    "feat_title": [
      ""
    ]
  }
]
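
The fields appear to follow the SQuAD convention: each entry of answers.answer_start is the character offset of the corresponding answers.text inside context (both samples above use offset 0, and the answer is a prefix of the context). A minimal sanity check over rows in this flattened format; the helper name is ours, not part of the dataset:

def check_answer_spans(row):
    """Assert that every answer text starts at its recorded character offset."""
    for start, text in zip(row["answers.answer_start"], row["answers.text"]):
        assert row["context"][start:start + len(text)] == text, (start, text)

# Usage: call check_answer_spans(sample) on each row dict shaped like the samples above.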

Dataset Fields

The dataset has the following fields (also called "features"):

{
  "context": "Value(dtype='string', id=None)",
  "question": "Value(dtype='string', id=None)",
  "answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
  "answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
  "feat_id": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
  "feat_title": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
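
For reference, the same schema can be declared programmatically; a sketch equivalent to the printed representation above:

from datasets import Features, Sequence, Value

features = Features({
    "context": Value("string"),
    "question": Value("string"),
    "answers.text": Sequence(Value("string")),
    "answers.answer_start": Sequence(Value("int32")),
    "feat_id": Sequence(Value("string")),
    "feat_title": Sequence(Value("string")),
})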

Dataset Splits

This dataset is split into a train and a validation split. The split sizes are as follows:

Split name  Num samples
train       456
valid       114
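
A typical way to load and inspect the two splits with the datasets library; the repository id below is a placeholder, not the dataset's actual Hub id, and this assumes the underlying data files can be read (the viewer above only failed while converting them to Parquet):

from datasets import load_dataset

# Placeholder repository id; substitute the dataset's actual Hub id.
ds = load_dataset("your-username/autotrain-data-demo-train-project")
print(ds["train"].num_rows, ds["valid"].num_rows)  # expected: 456 and 114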