Dataset Card for Spanish Billion Words

Dataset Summary

The Spanish Billion Words Corpus is an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web. These resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project corpora (including Europarl), the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.

This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus.
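Given that layout, the sentences can be read straight out of the downloaded archive with the standard library alone. The sketch below is an illustration, not an official loader; the archive path, member layout, and UTF-8 encoding are assumptions based on the description above:

```python
import tarfile

def iter_sentences(archive_path):
    """Yield one sentence per line from every text file in the archive.

    Assumes a tar archive of plain-text files with one sentence per
    line, encoded as UTF-8 (both are assumptions, not confirmed here).
    """
    with tarfile.open(archive_path, mode="r:*") as tar:  # "r:*" auto-detects bz2
        for member in tar:
            if not member.isfile():
                continue
            fh = tar.extractfile(member)
            if fh is None:
                continue
            for raw in fh:
                sentence = raw.decode("utf-8").rstrip("\n")
                if sentence:
                    yield sentence
```

With the SBWCE download (clean_corpus.tar.bz2), the `"r:*"` mode lets `tarfile` detect the bz2 compression automatically, and the generator avoids holding the ~1.5 billion-word corpus in memory.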

Supported Tasks and Leaderboards

This dataset can be used for language modelling and for pretraining language models.

Languages

The text in this dataset is in Spanish, BCP-47 code: 'es'.

Dataset Structure

Data Instances

Each example in this dataset is a sentence in Spanish:

{'text': 'Yo me coloqué en un asiento próximo a una ventana cogí un libro de una mesa y empecé a leer'}

Data Fields

  • text: a sentence in Spanish

Data Splits

The dataset is not split.

Dataset Creation

Curation Rationale

The Spanish Billion Words Corpus was created to train word embeddings using the word2vec algorithm provided by the gensim package.

Source Data

Initial Data Collection and Normalization

The corpus was created by compiling the resources listed in the Dataset Summary above.

All the annotated corpora (such as Ancora, SenSem and Tibidabo) were stripped of their annotations, and the parallel corpora (mostly from the OPUS Project) were preprocessed to keep only their Spanish portions.

Once the whole corpus was unannotated, all non-alphanumeric characters were replaced with whitespace, all numbers with the token “DIGITO”, and runs of multiple whitespace characters with a single space.

The capitalization of the words remained unchanged.
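The normalization steps above can be sketched as a small Python helper. This is a reconstruction for illustration, not the author's original script; the exact regular expressions and the order of the replacements are assumptions:

```python
import re

def normalize(text):
    """Apply the normalization described in the card: replace
    non-alphanumerics with whitespace, mask numbers as DIGITO,
    collapse whitespace, and leave capitalization unchanged."""
    text = re.sub(r"[^\w\s]", " ", text)      # non-alphanumerics -> whitespace
    text = re.sub(r"\d+", "DIGITO", text)     # each number -> the token DIGITO
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace runs
    return text
```

For example, `normalize("Tengo 25 años, ¿y tú?")` yields `"Tengo DIGITO años y tú"`. Note that Python's `\w` also keeps underscores and matches accented letters, which fits Spanish text but may differ in detail from the original pipeline.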

Who are the source language producers?

The data was compiled and processed by Cristian Cardellino.

Annotations

The dataset is unannotated.

Annotation process

[N/A]

Who are the annotators?

[N/A]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

The data was collected and processed by Cristian Cardellino.

Licensing Information

The dataset is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0).

Citation Information

@misc{cardellinoSBWCE,
     author = {Cardellino, Cristian},
     title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},
     url = {https://crscardellino.github.io/SBWCE/},
     month = {August},
     year = {2019}
}

Contributions

Thanks to @mariagrandury for adding this dataset.
