Dataset Card for Spanish Billion Words

Dataset Summary

The Spanish Billion Words Corpus is an unannotated Spanish corpus of nearly 1.5 billion words, compiled from various resources on the web. These resources include the Spanish portions of SenSem, the AnCora corpus, several OPUS Project corpora and Europarl, the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.

This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus.
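For illustration, a minimal sketch of iterating over the raw files, assuming they have been downloaded and extracted to a local directory (the directory name here is hypothetical):

    import os

    corpus_dir = "spanish_billion_words"  # hypothetical local path

    for name in sorted(os.listdir(corpus_dir)):
        with open(os.path.join(corpus_dir, name), encoding="utf-8") as f:
            for line in f:
                sentence = line.rstrip("\n")  # one sentence per line
                # ... process the sentence ...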

Supported Tasks and Leaderboards

This dataset can be used for language modelling and for pretraining language models.

Languages

The text in this dataset is in Spanish, BCP-47 code: 'es'.

Dataset Structure

Data Instances

Each example in this dataset is a sentence in Spanish:

{'text': 'Yo me coloqué en un asiento próximo a una ventana cogí un libro de una mesa y empecé a leer'}
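A minimal sketch of inspecting a few examples with the Hugging Face datasets library, assuming the dataset is hosted on the Hub under the identifier spanish_billion_words and exposed as a single "train" split:

    import itertools
    from datasets import load_dataset

    # Streaming avoids downloading the full corpus just to look at a few rows.
    ds = load_dataset("spanish_billion_words", split="train", streaming=True)

    for example in itertools.islice(ds, 3):
        print(example["text"])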

Data Fields

  • text: a sentence in Spanish

Data Splits

The dataset is not split.

Dataset Creation

Curation Rationale

The Spanish Billion Words Corpus was created to train word embeddings using the word2vec algorithm as implemented in the gensim package.
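A minimal sketch of that use case with gensim's word2vec implementation (the file name and hyperparameters here are illustrative, not those used for the published embeddings):

    from gensim.models import Word2Vec
    from gensim.models.word2vec import LineSentence

    # LineSentence yields one whitespace-tokenized sentence per line,
    # matching the corpus layout described above.
    sentences = LineSentence("spanish_billion_words_00.txt")  # hypothetical file

    # Parameter names follow gensim >= 4.0 ("vector_size" was "size" in 3.x).
    model = Word2Vec(sentences, vector_size=300, window=5, min_count=5, workers=4)
    model.wv.save_word2vec_format("sbw_vectors.txt")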

Source Data

Initial Data Collection and Normalization

The corpus was created by compiling the following resources:

  • the Spanish portion of the SenSem corpus
  • the Spanish portion of the AnCora corpus
  • the Tibidabo Treebank and the IULA Spanish LSP Treebank
  • the Spanish portions of several OPUS Project corpora and of Europarl
  • dumps from the Spanish Wikipedia, Wikisource and Wikibooks

All the annotated corpora (such as AnCora, SenSem and Tibidabo) were stripped of their annotations, and the parallel corpora (most of them from the OPUS Project) were preprocessed to keep only their Spanish portions.

Once the whole corpus was free of annotations, all non-alphanumeric characters were replaced with whitespace, all numbers with the token “DIGITO”, and every run of multiple whitespace characters with a single space.

The capitalization of the words remained unchanged.
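These steps can be approximated with a short Python function (a reconstruction from the description above, not the original preprocessing script):

    import re

    def normalize(text):
        # Replace each run of digits with the token DIGITO.
        text = re.sub(r"[0-9]+", "DIGITO", text)
        # Replace remaining non-alphanumeric characters with whitespace;
        # \w is Unicode-aware, so accented Spanish letters are kept.
        text = re.sub(r"[^\w\s]", " ", text)
        # Collapse runs of whitespace into a single space; casing is untouched.
        return re.sub(r"\s+", " ", text).strip()

    print(normalize("Yo leí 3 libros, ¡en 2019!"))
    # -> Yo leí DIGITO libros en DIGITO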

Who are the source language producers?

The data was compiled and processed by Cristian Cardellino.

Annotations

The dataset is unannotated.

Annotation process

[N/A]

Who are the annotators?

[N/A]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

The data was collected and processed by Cristian Cardellino.

Licensing Information

The dataset is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0).

Citation Information

@misc{cardellinoSBWCE,
     author = {Cardellino, Cristian},
     title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},
     url = {https://crscardellino.github.io/SBWCE/},
     month = {August},
     year = {2019}
}

Contributions

Thanks to @mariagrandury for adding this dataset.