# Swahili Text Dataset

## Overview
This dataset contains a comprehensive collection of Swahili text data, derived from the AfriBERTa Corpus. It provides a rich resource for natural language processing tasks focused on the Swahili language.
## Dataset Details
- Source: AfriBERTa Corpus (Swahili subset)
- Language: Swahili
- Size: [Insert total number of samples here]
- Format: Hugging Face Dataset
## Content
The dataset consists of two main columns:
- `id`: A unique identifier for each text entry
- `text`: The Swahili text content
## Usage
You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("[Your_HuggingFace_Username]/[Your_Dataset_Name]")
```

Replace `[Your_HuggingFace_Username]` and `[Your_Dataset_Name]` with the appropriate values for your uploaded dataset.
## Data Fields
- `id`: string
- `text`: string
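As a quick sanity check, each record should be a mapping with string-valued `id` and `text` fields. A minimal sketch of such a check (the sample records below are invented placeholders, not real corpus entries):

```python
def validate_record(record: dict) -> bool:
    """Return True if the record has string 'id' and 'text' fields."""
    return (
        isinstance(record.get("id"), str)
        and isinstance(record.get("text"), str)
    )

# Hypothetical placeholder records, for illustration only.
samples = [
    {"id": "0", "text": "Habari ya asubuhi"},
    {"id": "1", "text": "Karibu sana"},
]

assert all(validate_record(r) for r in samples)
```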
## Data Splits
This dataset combines training and test splits from the original AfriBERTa Corpus. The data has been shuffled with a fixed seed (42) to ensure reproducibility.
## Dataset Creation
This dataset was created by:
- Loading the Swahili subset of the AfriBERTa Corpus
- Concatenating the training and test splits
- Shuffling the combined dataset
- Extracting the 'id' and 'text' fields
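The steps above can be sketched in plain Python. This is a toy illustration only: the two lists stand in for the real AfriBERTa train and test splits, and the seeded shuffle mirrors the fixed seed of 42 noted under Data Splits.

```python
import random

# Toy stand-ins for the train and test splits of the AfriBERTa
# Swahili subset; the real data would be loaded from the Hub.
train_split = [{"id": f"train-{i}", "text": f"sentensi {i}", "extra": None} for i in range(3)]
test_split = [{"id": f"test-{i}", "text": f"jaribio {i}", "extra": None} for i in range(2)]

# Steps 1-2: concatenate the training and test splits.
combined = train_split + test_split

# Step 3: shuffle with a fixed seed so the order is reproducible.
random.Random(42).shuffle(combined)

# Step 4: keep only the 'id' and 'text' fields.
dataset = [{"id": r["id"], "text": r["text"]} for r in combined]
```

Because the shuffle uses a fixed seed, re-running this pipeline yields the same ordering every time.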
## Intended Uses
This dataset can be used for various natural language processing tasks involving the Swahili language, such as:
- Language modeling
- Text classification
- Named entity recognition
- Machine translation (as a source or target language)
- Sentiment analysis
- And more...
## Limitations
- The dataset is limited to the content available in the original AfriBERTa Corpus.
- It may not represent all dialects or variations of the Swahili language.
- The quality and accuracy of the text content depend on the original data source.
## Citation
If you use this dataset, please cite the original AfriBERTa Corpus:
```bibtex
@inproceedings{ogueji-etal-2021-small,
    title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
    author = "Ogueji, Kelechi and
      Zhu, Yuxin and
      Lin, Jimmy",
    booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.mrl-1.11",
    pages = "116--126",
}
```
## Licensing Information
This dataset is derived from the AfriBERTa Corpus. For usage terms and conditions, please refer to the original dataset's license.
## Contact
If you have questions or comments about this specific version of the dataset, please open an issue in this repository or contact [ronleon76@gmail.com].
Dataset created and curated by [AdeptSchneider]. Last updated: [09/10/2024]