
Dataset Card for Spanish Billion Words

Dataset Summary

The Spanish Billion Words Corpus is an unannotated Spanish corpus of nearly 1.5 billion words, compiled from various web resources. These resources include the Spanish portions of SenSem, the AnCora Corpus, some corpora from the OPUS Project (including Europarl), the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps of the Spanish Wikipedia, Wikisource, and Wikibooks.

This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus.

Supported Tasks and Leaderboards

This dataset can be used for language modelling and for pretraining language models.


Languages

The text in this dataset is in Spanish (BCP-47 code: es).

Dataset Structure

Data Instances

Each example in this dataset is a sentence in Spanish:

{'text': 'Yo me coloqué en un asiento próximo a una ventana cogí un libro de una mesa y empecé a leer'}
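Since each example is a single-sentence record, the split can be inspected a few rows at a time. The sketch below shows a small helper for taking the first few examples from any (possibly streaming) iterable; the `datasets` usage in the comment assumes network access and uses this card's repository id, `spanish_billion_words`.

```python
import itertools


def take_first(examples, n):
    """Return the first n examples from any (possibly streaming) iterable."""
    return list(itertools.islice(examples, n))


# With the `datasets` library installed and network access, the split can be
# streamed without downloading the whole corpus first:
#
#   from datasets import load_dataset
#   ds = load_dataset("spanish_billion_words", split="train", streaming=True)
#   for example in take_first(ds, 3):
#       print(example["text"])
```

Streaming mode is the practical way to sample a corpus of this size, since it yields examples lazily instead of materializing all ~50 million sentences.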

Data Fields

  • text: a sentence in Spanish

Data Splits

The dataset is not split.

Dataset Creation

Curation Rationale

The Spanish Billion Words Corpus was created to train word embeddings using the word2vec implementation provided by the gensim package.

Source Data

Initial Data Collection and Normalization

The corpus was created by compiling the resources listed in the Dataset Summary.

All the annotated corpora (such as AnCora, SenSem, and Tibidabo) were stripped of their annotations, and the parallel corpora (mostly from the OPUS Project) were preprocessed to keep only their Spanish portions.

Once the whole corpus was stripped of annotations, all non-alphanumeric characters were replaced with whitespace, all numbers were replaced with the token “DIGITO”, and runs of multiple whitespace characters were collapsed into a single one.

The capitalization of the words remained unchanged.
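The normalization steps above can be sketched as a small function. This is a reconstruction of the described steps, not the original preprocessing script; in particular, the order of the substitutions and the use of `\w` (which approximates "alphanumeric" and also keeps underscores) are assumptions.

```python
import re


def normalize(line: str) -> str:
    """Sketch of the card's preprocessing: numbers become the token
    "DIGITO", non-alphanumeric characters become whitespace, and runs of
    whitespace collapse to a single space. Capitalization is untouched."""
    line = re.sub(r"[0-9]+", "DIGITO", line)   # numbers -> DIGITO
    line = re.sub(r"[^\w\s]", " ", line)       # punctuation -> space
    line = re.sub(r"\s+", " ", line)           # collapse whitespace
    return line.strip()


print(normalize("En 1998, Ana leyó 3 libros."))
# -> En DIGITO Ana leyó DIGITO libros
```

Replacing digits before stripping punctuation matters: doing it the other way around would leave bare digit runs like `1998` in place when the number-replacement step looks for them.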

Who are the source language producers?

The data was compiled and processed by Cristian Cardellino.


Annotations

The dataset is unannotated.

Annotation process

The dataset is unannotated, so no annotation process was involved.

Who are the annotators?

The dataset is unannotated, so no annotators were involved.
Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

The data was collected and processed by Cristian Cardellino.

Licensing Information

The dataset is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0).

Citation Information

@misc{cardellino2019sbwce,
    author = {Cardellino, Cristian},
    title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},
    url = {},
    month = {August},
    year = {2019}
}


Thanks to @mariagrandury for adding this dataset.