
Dataset Card for BrWaC

Dataset Summary

BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework and made public for research purposes. The current version, released in January 2017, comprises 3.53 million documents, 2.68 billion tokens, and 5.79 million types. Note that this resource is available solely for academic research purposes; users agree not to use it for any commercial applications. The data must be downloaded manually from https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC

Supported Tasks and Leaderboards

[More Information Needed]

Languages

Portuguese

Dataset Structure

Data Instances

An example from the BrWaC dataset looks as follows:

{
  "doc_id": "netg-1afc73",
  "text": {
    "paragraphs": [
      [
        "Conteúdo recente"
      ],
      [
        "ESPUMA MARROM CHAMADA \"NINGUÉM MERECE\""
      ],
      [
        "31 de Agosto de 2015, 7:07 , por paulo soavinski - | No one following this article yet."
      ],
      [
        "Visualizado 202 vezes"
      ],
      [
        "JORNAL ELETRÔNICO DA ILHA DO MEL"
      ],
      [
        "Uma espuma marrom escuro tem aparecido com frequência na Praia de Fora.",
        "Na faixa de areia ela aparece disseminada e não chama muito a atenção.",
        "No Buraco do Aipo, com muitas pedras, ela aparece concentrada.",
        "É fácil saber que esta espuma estranha está lá, quando venta.",
        "Pequenos algodões de espuma começam a flutuar no espaço, pertinho da Praia do Saquinho.",
        "Quem pode ajudar na coleta deste material, envio a laboratório renomado e pagamento de análises, favor entrar em contato com o site."
      ]
    ]
  },
  "title": "ESPUMA MARROM CHAMADA ‟NINGUÉM MERECE‟ - paulo soavinski",
  "uri": "http://blogoosfero.cc/ilhadomel/pousadasilhadomel.com.br/espuma-marrom-chamada-ninguem-merece"
}

Data Fields

  • doc_id: The document ID
  • title: The document title
  • uri: URI where the document was extracted from
  • text: The document body, as a list of paragraphs, where each paragraph is a list of sentences (strings)
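Because each document stores its body as nested lists of sentences rather than a single string, downstream uses (e.g. language-model pretraining) typically need to flatten it first. A minimal sketch, assuming the field layout shown above; the helper name `doc_to_text` is our own:

```python
def doc_to_text(example):
    """Join the nested paragraphs/sentences of a BrWaC record into one string.

    The "text" field holds a list of paragraphs, and each paragraph is a
    list of sentence strings; paragraphs are separated with newlines here.
    """
    paragraphs = example["text"]["paragraphs"]
    return "\n".join(" ".join(sentences) for sentences in paragraphs)


# A trimmed version of the record shown above:
example = {
    "doc_id": "netg-1afc73",
    "text": {
        "paragraphs": [
            ["Conteúdo recente"],
            [
                "Uma espuma marrom escuro tem aparecido com frequência na Praia de Fora.",
                "Na faixa de areia ela aparece disseminada e não chama muito a atenção.",
            ],
        ]
    },
}

print(doc_to_text(example))
```

The same function can be passed to `datasets.Dataset.map` to add a flat text column to the whole split.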

Data Splits

The dataset contains a single train split of 3,530,796 samples.

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@inproceedings{wagner2018brwac,
  title={The {BrWaC} corpus: A new open resource for {Brazilian Portuguese}},
  author={Wagner Filho, Jorge A. and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline},
  booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year={2018}
}

Contributions

Thanks to @jonatasgrosman for adding this dataset.
