Dataset Card for Book_Stitch
This dataset contains books from Project Gutenberg, tokenized into 1020-token chunks with markers that indicate the section and book unique identifier (UID). These markers serve as both prefix and suffix for the sections, ensuring that the sequential nature of each book is preserved and facilitating later text reconstruction. The book_stitch dataset is designed for training AI models to handle long texts in sections, retaining context for tasks like summarization, text stitching, and document analysis.
Dataset Details
Dataset Description
The book_stitch dataset is part of a series designed to teach AI models how to handle large documents, including stitching and unstitching sections of text. Each book is tokenized into fixed-size chunks of 1020 tokens, with markers attached as prefix and suffix to mark the beginning and end of each section. This dataset works in conjunction with the context_stitch and train_stitch datasets, allowing models to maintain long-range context across the sections of a document and enabling comprehensive text analysis and reassembly.
- Curated by: Robert McNarland, McNarland Software Consultation Inc.
- Funded by: None
- Shared by: None
- Language(s) (NLP): English (books from Project Gutenberg)
- License: MIT

Dataset Sources

- Repository: R3troR0b/book_stitch
- Paper: [More Information Needed]
- Demo: [More Information Needed]
Uses
Direct Use
- Document Classification and Reconstruction: The book_stitch dataset teaches models to classify books and reconstruct them from tokenized chunks; the marked sections support document-level classification and content retrieval (a loading sketch follows this list).
- Text Stitching and Unstitching: Models can learn to stitch and unstitch text segments, with markers indicating where sections begin and end, supporting tasks like reassembling fragmented documents or summarizing long-form text.
- Long-Document Modeling: The dataset trains models to process long texts efficiently, maintaining contextual understanding across multiple sections through the section markers and UIDs.
- Contextual Inference: By identifying relationships between text sections, models can better infer meaning and connections in lengthy documents, supporting tasks such as question answering, summarization, and complex search.
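For any of these uses, the records can be pulled straight from the Hub with the `datasets` library. The sketch below is a minimal starting point: the repo id comes from the Dataset Sources section above, the `train` split name is an assumption, and the field names follow the example record in the Dataset Structure section.

```python
from datasets import load_dataset

# Minimal loading sketch; the "train" split name is an assumption.
ds = load_dataset("R3troR0b/book_stitch", split="train")

# "label" and "text" follow the example record in Dataset Structure.
for record in ds.select(range(3)):
    print(record["label"], record["text"][:80])
```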
Out-of-Scope Use
The dataset is not intended for use cases unrelated to document classification, text reassembly, or handling long-range context in texts. It may not be applicable to non-English texts.
Dataset Structure
Each entry in the dataset consists of:
- Label: the prefix and suffix stitch markers, indicating the section number and the book's UID (e.g., [/SEC:1;B-5]).
- Text: a 1020-token chunk of the book's content.

Example:
{
"label": "[/SEC:1;B-5] [/SEC:2;B-5]",
"text": "CHAPTER I: START OF THE PROJECT GUTENBERG EBOOK..."
}
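Because the prefix and suffix markers share one textual format, unstitching starts with recovering them from the label. The helper below is a hypothetical sketch; the marker grammar `[/SEC:<section>;B-<uid>]` is inferred from the example record above.

```python
import re

# Marker grammar inferred from the example record above.
MARKER_RE = re.compile(r"\[/SEC:(\d+);B-(\d+)\]")

def parse_label(label):
    """Return the (section_number, book_uid) tuples found in a label."""
    return [(int(sec), int(uid)) for sec, uid in MARKER_RE.findall(label)]

print(parse_label("[/SEC:1;B-5] [/SEC:2;B-5]"))  # [(1, 5), (2, 5)]
```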
Dataset Creation
Curation Rationale
The book_stitch dataset was created to help AI models understand the structure of long-form text. By breaking books into consistent tokenized chunks with markers, models can be trained to stitch and unstitch sections, allowing for sophisticated text handling.
Source Data
Data Collection and Processing
The dataset was generated with a custom-built tokenizer that splits Project Gutenberg books into fixed 1020-token chunks and attaches section and book-UID markers. The final chunk of each book may be shorter than 1020 tokens. Tables of contents and appendices are treated as part of the text.
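The custom tokenizer itself is not published with the card, so the sketch below approximates the described pipeline using GPT-2's tokenizer as a stand-in. The prefix/suffix pairing (a chunk's own section followed by the next section) is an inference from the example record, not a documented guarantee.

```python
from transformers import AutoTokenizer

# Stand-in tokenizer: the card's custom tokenizer is not published.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def chunk_book(text, book_uid, chunk_size=1020):
    """Split one book into chunk_size-token pieces with stitch markers.

    The final chunk may be shorter than chunk_size, as noted above.
    """
    ids = tokenizer.encode(text)
    records = []
    for sec, start in enumerate(range(0, len(ids), chunk_size), start=1):
        records.append({
            # Prefix marker = this section, suffix marker = next section
            # (pairing inferred from the example record).
            "label": f"[/SEC:{sec};B-{book_uid}] [/SEC:{sec + 1};B-{book_uid}]",
            "text": tokenizer.decode(ids[start:start + chunk_size]),
        })
    return records
```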
Who are the source data producers?
Titles in the book_stitch dataset are from Project Gutenberg's English collection.
Annotations
Annotation process
This dataset does not include any additional annotations beyond the section markers and UIDs.
Who are the annotators?
There are no annotators for this dataset, as it relies solely on automated tokenization and marking.
Personal and Sensitive Information
This dataset does not contain personal, sensitive, or private information. All books are sourced from Project Gutenberg, a collection of public domain works.
Bias, Risks, and Limitations
Recommendations
Users should be aware that:
- The dataset is limited to English books from Project Gutenberg and may not generalize well to non-English or non-literary domains.
- Because these books are public domain works, the texts may reflect biases inherent in historical writing.
Citation
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary
- Book UID (BUID): A unique identifier assigned to each book.
- Stitch Markers: Markers attached to each section as prefix and suffix to indicate the section number and the book's UID (e.g., [/SEC:1;B-5]).
- Contextual Stitching: The process of stitching together sections of text while maintaining continuity.
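To make the last entry concrete, here is a hypothetical reconstruction helper. It assumes the first marker in each label carries the chunk's own section number, which is an inference from the example record rather than a documented property of the dataset.

```python
import re

MARKER_RE = re.compile(r"\[/SEC:(\d+);B-(\d+)\]")

def stitch_book(records, book_uid):
    """Collect one book's chunks, order them by their prefix section
    number, and concatenate the text."""
    sections = []
    for rec in records:
        markers = MARKER_RE.findall(rec["label"])
        if markers and int(markers[0][1]) == book_uid:
            sections.append((int(markers[0][0]), rec["text"]))
    return "".join(text for _, text in sorted(sections))
```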
More Information
[More Information Needed]
Dataset Card Authors
Robert McNarland, McNarland Software Consultation Inc.
Dataset Card Contact
[More Information Needed]