AutoTrain Dataset for project: tpsmay22

Dataset Description

This dataset has been automatically processed by AutoTrain for project tpsmay22.
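
The dataset can be loaded with the Hugging Face datasets library. The snippet below is a minimal sketch; the repository id is a placeholder, since this card does not state the exact repository name.

from datasets import load_dataset

# Placeholder repository id -- replace with the actual repo for this project.
REPO_ID = "user/autotrain-data-tpsmay22"

ds = load_dataset(REPO_ID)
print(ds)  # expected: a DatasetDict with "train" and "valid" splits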

Languages

The BCP-47 code for the dataset's language is unk (unknown).

Dataset Structure

Data Instances

A sample from this dataset looks as follows:

[
  {
    "id": 828849,
    "feat_f_00": 0.5376503535622164,
    "feat_f_01": 1.943782180890636,
    "feat_f_02": 0.9135609975277558,
    "feat_f_03": 1.8069627709531364,
    "feat_f_04": 0.2608497764144719,
    "feat_f_05": 0.2210137962869367,
    "feat_f_06": -0.2041958755583295,
    "feat_f_07": 1,
    "feat_f_08": 3,
    "feat_f_09": 1,
    "feat_f_10": 3,
    "feat_f_11": 7,
    "feat_f_12": 1,
    "feat_f_13": 1,
    "feat_f_14": 3,
    "feat_f_15": 3,
    "feat_f_16": 0,
    "feat_f_17": 3,
    "feat_f_18": 3,
    "feat_f_19": -2.224980946907772,
    "feat_f_20": -0.0497802292031301,
    "feat_f_21": -3.926047324073047,
    "feat_f_22": 3.518427812720448,
    "feat_f_23": -3.682602827653292,
    "feat_f_24": -0.391453171033426,
    "feat_f_25": 1.519591066386293,
    "feat_f_26": 1.689261040286172,
    "feat_f_27": "AEBCBAHLAC",
    "feat_f_28": 379.1152852815462,
    "feat_f_29": 0,
    "feat_f_30": 1,
    "target": 0.0
  },
  {
    "id": 481680,
    "feat_f_00": 0.067304409313422,
    "feat_f_01": -2.1380257328497443,
    "feat_f_02": -1.071190705030414,
    "feat_f_03": -0.632098414262756,
    "feat_f_04": -0.6884213952425722,
    "feat_f_05": 0.9001794148519768,
    "feat_f_06": 1.0522875373816212,
    "feat_f_07": 2,
    "feat_f_08": 2,
    "feat_f_09": 2,
    "feat_f_10": 2,
    "feat_f_11": 3,
    "feat_f_12": 4,
    "feat_f_13": 4,
    "feat_f_14": 1,
    "feat_f_15": 3,
    "feat_f_16": 1,
    "feat_f_17": 2,
    "feat_f_18": 4,
    "feat_f_19": -0.1749962904609809,
    "feat_f_20": -2.14813633573821,
    "feat_f_21": -1.959294186862138,
    "feat_f_22": -0.0458843535688706,
    "feat_f_23": 0.7256376584744342,
    "feat_f_24": -2.5463878383279823,
    "feat_f_25": 2.3352097148227915,
    "feat_f_26": 0.4798465276880099,
    "feat_f_27": "BCBBDBFLCA",
    "feat_f_28": -336.9163876318925,
    "feat_f_29": 1,
    "feat_f_30": 0,
    "target": 0.0
  }
]
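
A preview like the one above can be reproduced by indexing the loaded split directly; this continues the hypothetical ds object from the loading sketch.

# First two rows of the train split, as a list of dicts with the same
# structure as the sample shown above.
preview = [ds["train"][i] for i in range(2)]
for row in preview:
    print(row["id"], row["feat_f_27"], row["target"])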

Dataset Fields

The dataset has the following fields (also called "features"):

{
  "id": "Value(dtype='int64', id=None)",
  "feat_f_00": "Value(dtype='float64', id=None)",
  "feat_f_01": "Value(dtype='float64', id=None)",
  "feat_f_02": "Value(dtype='float64', id=None)",
  "feat_f_03": "Value(dtype='float64', id=None)",
  "feat_f_04": "Value(dtype='float64', id=None)",
  "feat_f_05": "Value(dtype='float64', id=None)",
  "feat_f_06": "Value(dtype='float64', id=None)",
  "feat_f_07": "Value(dtype='int64', id=None)",
  "feat_f_08": "Value(dtype='int64', id=None)",
  "feat_f_09": "Value(dtype='int64', id=None)",
  "feat_f_10": "Value(dtype='int64', id=None)",
  "feat_f_11": "Value(dtype='int64', id=None)",
  "feat_f_12": "Value(dtype='int64', id=None)",
  "feat_f_13": "Value(dtype='int64', id=None)",
  "feat_f_14": "Value(dtype='int64', id=None)",
  "feat_f_15": "Value(dtype='int64', id=None)",
  "feat_f_16": "Value(dtype='int64', id=None)",
  "feat_f_17": "Value(dtype='int64', id=None)",
  "feat_f_18": "Value(dtype='int64', id=None)",
  "feat_f_19": "Value(dtype='float64', id=None)",
  "feat_f_20": "Value(dtype='float64', id=None)",
  "feat_f_21": "Value(dtype='float64', id=None)",
  "feat_f_22": "Value(dtype='float64', id=None)",
  "feat_f_23": "Value(dtype='float64', id=None)",
  "feat_f_24": "Value(dtype='float64', id=None)",
  "feat_f_25": "Value(dtype='float64', id=None)",
  "feat_f_26": "Value(dtype='float64', id=None)",
  "feat_f_27": "Value(dtype='string', id=None)",
  "feat_f_28": "Value(dtype='float64', id=None)",
  "feat_f_29": "Value(dtype='int64', id=None)",
  "feat_f_30": "Value(dtype='int64', id=None)",
  "target": "Value(dtype='float32', id=None)"
}
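
The strings above are the repr of datasets.Value feature types. The sketch below rebuilds the same schema explicitly, assuming the column listing above is complete; such a Features object can be passed as features= to Dataset.from_pandas or to Dataset.cast to enforce these types.

from datasets import Features, Value

# Column groups taken from the field listing above.
float_cols = [0, 1, 2, 3, 4, 5, 6, 19, 20, 21, 22, 23, 24, 25, 26, 28]
int_cols = [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 29, 30]

features = Features({
    "id": Value("int64"),
    **{f"feat_f_{i:02d}": Value("float64") for i in float_cols},
    **{f"feat_f_{i:02d}": Value("int64") for i in int_cols},
    "feat_f_27": Value("string"),  # the only string-typed feature
    "target": Value("float32"),
})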

Dataset Splits

This dataset is split into train and validation splits. The split sizes are as follows:

Split name    Num samples
train         719999
valid         180001
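
A short sketch of pulling both splits and checking the row counts against the table above (again using the placeholder repository id from the loading example):

from datasets import load_dataset

ds = load_dataset("user/autotrain-data-tpsmay22")  # placeholder repo id

# Row counts taken from the split table above.
assert ds["train"].num_rows == 719999
assert ds["valid"].num_rows == 180001

# Convert to pandas for downstream modeling.
train_df = ds["train"].to_pandas()
valid_df = ds["valid"].to_pandas()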