Duplicates in Dolma 1.7

#32 opened by vedaad

Hi,

There are many exact duplicates in the Dolma dataset, both by text and ID. Given your extensive de-duplication pipeline, is this a bug in the dataset?

For example, see the following:

import pandas as pd

# Downloaded from Hugging Face and extracted with `gzip -d books-0000.json.gz`
books0 = pd.read_json("books-0000.json", lines=True)

len(books0)
>>> 24358

len(books0.text.drop_duplicates(keep=False))
>>> 24246

books0.id.value_counts()
>>> id
>>> 4b4f937a0dc9156268adddeb1cff76885b56061f    31
>>> 2013973b56b26eeb371da72efedf22f6921235ce    24
>>> c8e625877f4a5ab7378c2713c9c720dc4eb2c8d7    21
>>> 8ea47781d1f36e5773e5f8b78576723052ca8dcc    11
>>> e927dbe32bf2ccfa7a141fcf4c3ce145d7f73918     8
>>>                                             ..
>>> 60c6a3c24856cf0bb46efcab3c1c9e42938a7acb     1
>>> 4dc420f0a1c88daa9fc00e128df9f32dcdc46531     1
>>> 48fd9dd3ef711f1ec64de9fdb15cc7356750592e     1
>>> 0e1ab4bd013506ce8d5d209cf2c749ca3d9d714a     1
>>> d19bb0e85b7c5dc7a6764ac8ec4ab71f38dd4a16     1
>>> Name: count, Length: 24256, dtype: int64
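
As a quick sanity check on the same `books0` frame, the rows sharing the most-repeated ID can be compared directly; if these really are exact duplicates rather than ID collisions, they should all carry a single distinct text:

# Rows sharing the most-repeated ID; if these are exact duplicates,
# every row should carry the same text (i.e. nunique() == 1)
top_id = books0.id.value_counts().idxmax()
books0[books0.id == top_id].text.nunique()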

There are also duplicate IDs in other splits, such as StarCoder, but there the duplicated IDs correspond to different texts. Could you clarify why these duplicates were kept in?
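
For reference, this is roughly the check I ran on a StarCoder shard (the file name below is just a placeholder for whichever shard is downloaded and extracted):

# Placeholder path; substitute any extracted StarCoder shard from Dolma
starcoder = pd.read_json("starcoder-0000.json", lines=True)

# IDs that appear more than once
counts = starcoder.id.value_counts()
repeated = counts[counts > 1].index

# Number of distinct texts per repeated ID; values > 1 mean the same ID
# is attached to different documents
starcoder[starcoder.id.isin(repeated)].groupby("id").text.nunique()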
