Tasks: Token Classification
Languages: French
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed with error code DatasetGenerationError. The underlying exception, raised while the viewer worker converted the dataset to Parquet (datasets.builder._prepare_split_single), is pyarrow.lib.ArrowNotImplementedError: "Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field."
Need help making the dataset viewer work? Review how to configure the dataset viewer and open a discussion for direct support.
_data_files (list) | _fingerprint (string) | _format_columns (sequence) | _format_kwargs (dict) | _format_type (null) | _indexes (dict) | _output_all_columns (bool) | _split (null)
---|---|---|---|---|---|---|---
[{"filename": "dataset.arrow"}] | cda80c8a50d4e5d7 | ["tags", "tokens"] | {} | null | {} | false | null
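The row above suggests that the viewer found not token-classification rows but the bookkeeping state written by Dataset.save_to_disk (a dataset.arrow file plus fields such as _format_kwargs), which the automatic Parquet conversion cannot serialize. The snippet below is a hypothetical repair sketch, not a confirmed procedure: the local path and repository id are placeholders, and it assumes the repository contents have been downloaded to disk.

from datasets import load_from_disk

# Assumption: "path/to/local/copy" contains the save_to_disk output
# (dataset.arrow + state.json) downloaded from the repository.
ds = load_from_disk("path/to/local/copy")

# push_to_hub re-uploads the data in a layout the dataset viewer can
# convert to Parquet; "user/dataset-repo" is a placeholder repository id.
ds.push_to_hub("user/dataset-repo")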
AutoTrain Dataset for project: test
Dataset Description
This dataset has been automatically processed by AutoTrain for project test.
Languages
The BCP-47 code for the dataset's language is fr.
Dataset Structure
Data Instances
A sample from this dataset looks as follows:
[
{
"tokens": [
"CCI",
"CCI",
"CCI",
"CCI bifocal G3, 7 et 25 mm",
"CCI bifocal G3, 7 et 25 mm",
"CCI",
"18/04/2019 : mammectomie dt + CA",
"18/04/2019 : mammectomie dt + CA",
"RO+ 20%",
" RO+ 20%",
"RO+",
"RO+",
"18/04/2019 : mammectomie dt + CA",
"18/04/2019 : mammectomie dt + CA",
"RP-",
"RP-",
"18/04/2019 : mammectomie dt + CA",
"18/04/2019 : mammectomie dt + CA",
"HER2 2+",
"HER2 2+",
"HER2 +",
"HER2 +",
"18/04/2019 : mammectomie dt + CA",
"18/04/2019 : mammectomie dt + CA",
"Fish+",
"Fish+",
"18/04/2019 : mammectomie dt + CA",
"18/04/2019 : mammectomie dt + CA",
"N+ 17/19",
"N+ 17/19",
"18/04/2019 : mammectomie dt + CA",
"18/04/2019 : mammectomie dt + CA",
"CA15-3 : 12 UI",
"CA15-3 : 12 UI",
"18/04/2019 : mammectomie dt + CA",
"18/04/2019 : mammectomie dt + CA",
"PS-0",
"PS-0",
"PS-0",
"PS-0",
" 03/2020",
"08/2020",
" 03/2020",
"08/2020"
],
"tags": [
28,
28,
28,
37,
37,
28,
14,
14,
29,
29,
29,
29,
32,
32,
33,
33,
34,
34,
19,
19,
19,
19,
20,
20,
17,
17,
18,
18,
23,
23,
24,
24,
6,
6,
7,
7,
27,
27,
27,
27,
12,
12,
12,
12
]
},
{
"tokens": [
"K sein D",
"1992 : K sein D",
"CA15-3 =1890",
"CA 15-3 : 5200",
"10/18",
"11/21",
"PS-2",
"10/18"
],
"tags": [
28,
14,
6,
6,
7,
7,
27,
12
]
}
]
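Each example stores two parallel lists of equal length: tokens (strings) and tags (integer class ids, described under Dataset Fields below). The snippet below is a small illustrative sketch that pairs the two lists of the second sample above; the values are copied verbatim from the preview.

# Pair each token with its integer tag for the second sample shown above.
sample = {
    "tokens": ["K sein D", "1992 : K sein D", "CA15-3 =1890", "CA 15-3 : 5200",
               "10/18", "11/21", "PS-2", "10/18"],
    "tags": [28, 14, 6, 6, 7, 7, 27, 12],
}
for token, tag in zip(sample["tokens"], sample["tags"]):
    print(f"{tag:>2}  {token}")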
Dataset Fields
The dataset has the following fields (also called "features"):
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['ALK', 'ALK_DATE', 'BRAF', 'BRAF_DATE', 'BRCA', 'BRCA_DATE', 'CA15-3', 'CA15-3_DATE', 'CK20', 'CK20_DATE', 'CK7', 'CK7_DATE', 'Date PS', 'Date arr\u00eat traitement', 'Date du diagnostic de la tumeur primitive', 'EGFR', 'EGFR_DATE', 'FISH', 'FISH_DATE', 'HER2', 'HER2_DATE', 'KI67', 'KI67_DATE', 'N+', 'N+_DATE', 'PDL1', 'PDL1_DATE', 'PS', 'Premier type histologique de cancer', 'RO', 'ROS', 'ROS_DATE', 'RO_DATE', 'RP', 'RP_DATE', 'TTF1', 'TTF1_DATE', 'Taille de la tumeur primitive au diagnostic', 'motif arr\u00eat traitement', 'r\u00e9cepteurs hormonaux', 'r\u00e9cepteurs_hormonaux_DATE'], id=None), length=-1, id=None)"
}
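The tags field is a Sequence of ClassLabel, so each integer tag maps to one of the 41 label names listed above. A minimal decoding sketch, assuming the data has been opened as a DatasetDict named ds (the variable name and the loading step are assumptions, not part of this card):

# Decode the integer "tags" of the first training example into label names.
# ds is assumed to be a datasets.DatasetDict with the features shown above.
label_names = ds["train"].features["tags"].feature.names  # ClassLabel name list
example = ds["train"][0]
decoded = [label_names[i] for i in example["tags"]]
print(list(zip(example["tokens"], decoded)))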
Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows (a small sanity-check sketch follows the table):
Split name | Num samples |
---|---|
train | 999 |
valid | 508 |
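As a quick check of these counts, a sketch assuming the same DatasetDict ds as above:

# Print the number of rows per split; expected values match the table above.
for split_name, split in ds.items():
    print(split_name, split.num_rows)  # train -> 999, valid -> 508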