What should the doc_id in duplicates contain?

#24
by newbietuan - opened

Hi, thanks for your great work!
I have some questions about the duplicates files.

Is the doc_id in en_head.duplicates.parquet the id of a document that should be kept? And is this result based on en_head.minhash.parquet? Another question: which minhash_signature is used — 0.7, 0.8, 0.9, or 1.0?

Together org

Hi @newbietuan , thanks for your question!

Each row in the duplicates files corresponds to a document with document id doc_id that is a duplicate and can be dropped if you wish to deduplicate the dataset. For each cluster of duplicates, there is exactly one sample which is not in the duplicates files and is hence kept in the data.
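Based on that description, deduplication amounts to dropping every document whose doc_id appears in the duplicates file. A minimal sketch (the file and column names here are assumptions based on this thread; in practice the ids would be read from en_head.duplicates.parquet):

```python
# Sketch: drop every document whose doc_id is listed as a duplicate.
# In practice, duplicate_ids would be loaded from the duplicates parquet file.
duplicate_ids = {"doc-2", "doc-4"}

documents = [
    {"doc_id": "doc-1", "text": "first document"},
    {"doc_id": "doc-2", "text": "copy of first document"},
    {"doc_id": "doc-3", "text": "another document"},
    {"doc_id": "doc-4", "text": "copy of another document"},
]

# Keep only documents NOT listed as duplicates; exactly one representative
# per duplicate cluster survives, since that representative is never listed.
deduplicated = [doc for doc in documents if doc["doc_id"] not in duplicate_ids]
```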

These duplicates are not based on minhash, but rather on exact deduplication of the web documents (i.e. the content of the .wet files from Common Crawl, which constitute the input to ccnet). We used a Bloom filter to efficiently deduplicate the data.
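The idea behind Bloom-filter deduplication is that membership checks are cheap and never miss a true duplicate (only rare false positives are possible). A toy sketch, not the production implementation:

```python
import hashlib


class BloomFilter:
    """Minimal Bloom filter for illustration purposes only."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(
            self.bits[pos // 8] & (1 << (pos % 8))
            for pos in self._positions(item)
        )


seen = BloomFilter()


def is_duplicate(text: str) -> bool:
    """First occurrence is kept; later identical texts are flagged."""
    if text in seen:
        return True  # possibly a rare false positive, never a miss
    seen.add(text)
    return False
```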

Hi, @mauriceweber Thanks for your reply!
I still have a follow-up question.
If I want to use the minhash signatures for deduplication, e.g. minhash_signature 0.9, should I compute the Jaccard similarity from the minhash signatures of two docs and then set a threshold to decide whether the two docs are duplicates?
If so, do you have any recommended settings?
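For reference, the standard way to estimate Jaccard similarity from two MinHash signatures is the fraction of positions where the signatures agree. A sketch with hypothetical 8-permutation signatures (the 0.8 threshold below is an illustrative choice, not a recommendation from this thread):

```python
def estimated_jaccard(sig_a: list[int], sig_b: list[int]) -> float:
    """The fraction of matching MinHash values estimates Jaccard similarity."""
    assert len(sig_a) == len(sig_b), "signatures must use the same permutations"
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)


# Hypothetical signatures for two documents (8 permutations):
sig_x = [3, 17, 5, 42, 8, 99, 1, 6]
sig_y = [3, 17, 9, 42, 8, 77, 1, 6]

sim = estimated_jaccard(sig_x, sig_y)  # 6 of 8 positions match -> 0.75
# One could then flag the pair as duplicates if sim exceeds a chosen
# threshold, e.g. 0.8 (an assumed value for illustration).
is_dup = sim > 0.8
```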

Together org

The Minhash signatures are intended to be further processed by a locality-sensitive hashing (LSH) step, which generates the duplicate clusters. Based on these clusters, you can do a further refinement step to eliminate false positives by calculating the Jaccard similarities only for samples within the clusters -- that way you don't have to compute the Jaccard similarity for every possible pair in the dataset.
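The LSH step described above is usually implemented by splitting each signature into bands and bucketing documents whose bands collide. A minimal sketch of that banding idea (function name and the toy signatures are my own, not from the repo's script):

```python
from collections import defaultdict
from itertools import combinations


def lsh_candidates(signatures: dict, num_bands: int, rows_per_band: int) -> set:
    """Return candidate duplicate pairs: doc_ids sharing at least one band.

    signatures maps doc_id -> minhash signature (a list of ints);
    num_bands * rows_per_band must equal the signature length.
    """
    buckets = defaultdict(set)
    for doc_id, sig in signatures.items():
        for b in range(num_bands):
            band = tuple(sig[b * rows_per_band:(b + 1) * rows_per_band])
            # Include the band index in the key so only same-position
            # bands can collide.
            buckets[(b, band)].add(doc_id)

    # Every bucket with more than one member yields candidate pairs,
    # which would then be refined via exact Jaccard similarity.
    pairs = set()
    for members in buckets.values():
        for a, b in combinations(sorted(members), 2):
            pairs.add((a, b))
    return pairs


sigs = {
    "a": [1, 2, 3, 4, 5, 6],
    "b": [1, 2, 3, 9, 9, 9],  # shares the first band with "a"
    "c": [7, 7, 7, 7, 7, 7],
}
candidates = lsh_candidates(sigs, num_bands=2, rows_per_band=3)
```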

You can use this script provided in the official Github repo to run the LSH step.
