The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error.
Error code: DatasetGenerationCastError

Exception: DatasetGenerationCastError

Message: An error occurred while generating the dataset.

All the data files must have the same columns, but at some point there are 8 new columns ({'title', 'published_date', 'link', 'abstract', 'categories', 'authors', 'unix_timestamp', 'id'}) and 2 missing columns ({'name', 'data'}).

This happened while the json dataset builder was generating data using hf://datasets/ethannlin/SchNovel/vector-db-data/cs_rag_data.jsonl (at revision ffae5a0b92f91580db115de2f803c33162b04d5f).

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1869, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 580, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
  id: string
  title: string
  authors: string
  categories: string
  abstract: string
  published_date: string
  unix_timestamp: double
  link: string
to
  {'name': Value(dtype='string', id=None),
   'data': {'10': [{'paper1': <paper struct>, 'paper2': <paper struct>}],
            '2':  [{'paper1': <paper struct>, 'paper2': <paper struct>}],
            ...   (middle of the schema elided in the viewer output)
            '6':  [{'paper1': <paper struct>, 'paper2': <paper struct>}],
            '8':  [{'paper1': <paper struct>, 'paper2': <paper struct>}]}}
  where each <paper struct> is
  {'abstract': Value(dtype='string', id=None), 'authors': Value(dtype='string', id=None),
   'categories': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None),
   'link': Value(dtype='string', id=None), 'published_date': Value(dtype='string', id=None),
   'title': Value(dtype='string', id=None), 'unix_timestamp': Value(dtype='float64', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1392, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1041, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 999, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1740, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 8 new columns ({'title', 'published_date', 'link', 'abstract', 'categories', 'authors', 'unix_timestamp', 'id'}) and 2 missing columns ({'name', 'data'}).
This happened while the json dataset builder was generating data using hf://datasets/ethannlin/SchNovel/vector-db-data/cs_rag_data.jsonl (at revision ffae5a0b92f91580db115de2f803c33162b04d5f).
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
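The remedy the error message points to is keeping files with different schemas in separate configurations, either by declaring them in the dataset card's YAML `configs` section (see the manual-configuration docs linked above) or by selecting files explicitly at load time. Below is a minimal, hedged sketch of the load-time approach with the `datasets` JSON builder; the only path taken from the error message is `vector-db-data/cs_rag_data.jsonl`, and any other globs or variable names are assumptions about the repository layout.

```python
# Sketch only: load the two schema groups separately so the JSON builder never
# has to cast them to a single set of columns. Paths other than the one quoted
# in the cast error are assumptions about how the repository is laid out.
from datasets import load_dataset

# Flat paper-metadata records (the file named in the error): columns
# id, title, authors, categories, abstract, published_date, unix_timestamp, link.
rag_papers = load_dataset(
    "ethannlin/SchNovel",
    data_files="vector-db-data/cs_rag_data.jsonl",
    split="train",
)

# Records with the {"name", "data"} schema shown in the preview below;
# "data/*.jsonl" is a hypothetical glob for wherever those files live.
paper_pairs = load_dataset(
    "ethannlin/SchNovel",
    data_files="data/*.jsonl",
    split="train",
)

print(rag_papers.column_names)   # ['id', 'title', 'authors', ...]
print(paper_pairs.column_names)  # ['name', 'data']
```

The same separation can instead be declared once in the README's YAML `configs` block, in which case the viewer builds each file group as its own configuration.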
name (string) | data (dict) |
---|---|
cs | {"10":[{"paper1":{"abstract":" Electronic word-of-mouth (eWOM) has become an important resource for(...TRUNCATED) |
cs | {"10":[{"paper1":{"abstract":" Conventional Knowledge Graph Completion (KGC) assumes that all test (...TRUNCATED) |
cs | {"10":[{"paper1":{"abstract":" A user who does not have a quantum computer but wants to perform qua(...TRUNCATED) |
cs | {"10":[{"paper1":{"abstract":" Sim-to-real transfer trains RL agents in the simulated environments (...TRUNCATED) |
cs | {"10":[{"paper1":{"abstract":" This article describes the preliminary qualitative results of a ther(...TRUNCATED) |
math | {"10":[{"paper1":{"abstract":" We give a formula for the class number of an arbitrary CM algebraic (...TRUNCATED) |
math | {"10":[{"paper1":{"abstract":" In Heintz-Schnorr (1982), the authors introduced the notion of corre(...TRUNCATED) |
math | {"10":[{"paper1":{"abstract":" We study the large-time asymptotic behavior of solutions toward the\(...TRUNCATED) |
math | {"10":[{"paper1":{"abstract":" We study the Klein-Gordon equation with general interaction term, wh(...TRUNCATED) |
math | {"10":[{"paper1":{"abstract":" We propose several Hodge theoretic analogues of the conjectures of H(...TRUNCATED) |
End of preview.
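For reference, here is a small sketch of how one of the preview rows above could be unpacked, assuming the `data` cell is a JSON object keyed by numeric strings ("2", "6", "8", "10") whose values are lists of paper1/paper2 records carrying the eight metadata fields from the cast error; the function name and the commented usage are illustrative only.

```python
# Sketch only: walk the paper1/paper2 pairs inside one {"name", "data"} record
# shaped like the preview rows above. Field names follow the schema in the
# cast error; the function name and the usage below are illustrative assumptions.
import json
from typing import Iterator, Tuple

def iter_pairs(row: dict) -> Iterator[Tuple[str, dict, dict]]:
    """Yield (key, paper1, paper2) for every pair stored under row["data"]."""
    data = row["data"]
    if isinstance(data, str):          # decode if the cell arrives as a JSON string
        data = json.loads(data)
    for key, pairs in data.items():    # keys like "2", "6", "8", "10"
        for pair in pairs:             # each list entry holds paper1 and paper2
            yield key, pair["paper1"], pair["paper2"]

# Hypothetical usage with a row loaded by the `datasets` JSON builder:
# for key, p1, p2 in iter_pairs(paper_pairs[0]):
#     print(key, p1["title"], "vs", p2["title"])
```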