BinhVQ dedup

Important: Please install lm_dataformat (pip install lm_dataformat) before using this dataset; the loading script depends on it to read the zst-compressed JSONL files.

How to use

import datasets

# Downloads and builds the dataset; requires lm_dataformat (see the note above).
dataset = datasets.load_dataset("imthanhlv/binhvq_dedup")
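
Once loaded, each split can be indexed like a regular datasets object. A minimal usage sketch; the column name "text" is an assumption based on the record format described below, not something the card confirms:

# Inspect the first training record (column name "text" is an assumption).
print(dataset["train"][0]["text"][:200])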

Dataset information

This dataset was created from a dump of https://github.com/binhvq/news-corpus dated 21/05/2021. I applied some simple preprocessing (sketched in code after this list):

  • Cleaned the HTML content with BeautifulSoup
  • Built each record as the concatenation title + "\n" + sapo + "\n" + content
  • Shuffled the records, split them into train and validation sets, and deduplicated them (exact match on the sha256 hash of each record)
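
For clarity, here is a minimal sketch of these preprocessing steps. The field names (title, sapo, content), the split fraction, and the seed are illustrative assumptions; the actual pipeline may differ in details.

import hashlib
import random

from bs4 import BeautifulSoup

def clean_html(raw_html):
    # Strip HTML tags with BeautifulSoup and trim surrounding whitespace.
    return BeautifulSoup(raw_html, "html.parser").get_text(separator=" ").strip()

def build_record(article):
    # Concatenate title, sapo (the lead paragraph), and content with newlines.
    return "\n".join(clean_html(article.get(f, "")) for f in ("title", "sapo", "content"))

def shuffle_split_dedup(articles, val_fraction=0.05, seed=42):
    # val_fraction and seed are assumptions, not values stated on the card.
    seen, records = set(), []
    for article in articles:
        text = build_record(article)
        # Exact-match deduplication: keep a record only if its sha256 digest is new.
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            records.append(text)
    random.Random(seed).shuffle(records)
    n_val = int(len(records) * val_fraction)
    return records[n_val:], records[:n_val]  # (train, validation)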