Missing duplicates parquet files

by bebensee - opened

Thanks for sharing and making this huge dataset available for research!

Most of the downloads have been working smoothly but for several of the duplicates download URLs I'm seeing a 404 on my end. For example:

--2023-11-29 07:07:27--
Connecting to connected.
Proxy request sent, awaiting response... 404 Not Found
2023-11-29 07:07:27 ERROR 404: Not Found.

The following is a list of all URLs that have failed for me out of the 2022-33, 2022-40, 2022-49, 2023-06, 2023-14 snapshots:

I think these files might also be corrupt:

When trying to load these with the datasets library like load_dataset("parquet", data_files=[fn1, fn2, ...]) I'm getting the following error, which persists after re-downloading:

pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
Together org

Hi @bebensee thanks for pointing this out!

It is possible that for some shards there is no associated duplicate file, which happens if all documents in that shard are either unique, or they are the first appearing document from any cluster of documents.

Regarding the corrupted files -- I can load both of them using polars or pandas. For example, running

import io
import requests
import polars as pl

urls = [
    # (the two parquet URLs in question)
]

for url in urls:
    response = requests.get(url)
    df = pl.read_parquet(io.BytesIO(response.content))
    print(url, len(df))

prints the row counts 3796 and 3857 for the two files.

So I think the files are valid parquet files, and the error might come from either (1) a corrupted file download (so the magic bytes are missing), or (2) an issue with the HF dataloader.

Hey @mauriceweber, to follow up on the topic of files with missing duplicates: it makes for a poor user experience via Hugging Face when loading the dataset, because the process crashes when it tries to download a file that doesn't exist. I looked through the download code inside download_manager.py and it doesn't look like there's any room for exception handling or parameters to continue with the rest of the files when downloading a list of URLs. I also didn't see anything that can be set as a download_config option and passed to the HF loader.

Looking at the duplicate code that was added earlier this week, I'd like to see if it's possible to add some checks around duplicate URLs to verify they exist before downloading, such as below

(inside _split_generators_sample inside

                dupe_url = os.path.join(
                        _URL_BASE, f"duplicates/{lst_id}.duplicates.parquet"
                )
                dupe_response = requests.head(dupe_url, allow_redirects=True)

                if dupe_response.status_code == 404:
                    # skip this shard -- no duplicates file exists for it
                    continue


or try downloading the files individually rather than in bulk, with a try/except surrounding each download. If I'm understanding correctly how Hugging Face's download manager works, downloading the RP2 dataset wouldn't lose parallelization, so moving from bulk downloads to single downloads shouldn't introduce latency.

That way the snapshot can continue to download without being hampered by a parquet file that doesn't exist.
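The per-file try/except idea above could be sketched like this (download_available and the injectable fetch parameter are my own invention for illustration, not part of the datasets API):

```python
import requests

def download_available(urls, fetch=requests.get):
    """Fetch each URL individually, skipping 404s instead of aborting the snapshot."""
    results = {}
    for url in urls:
        try:
            response = fetch(url, timeout=60)
            response.raise_for_status()  # raises HTTPError on 4xx/5xx
            results[url] = response.content
        except requests.exceptions.HTTPError as err:
            # e.g. a shard with no duplicates file returns 404 -- skip it
            print(f"skipping {url}: {err}")
    return results
```

The fetch parameter just makes the skip logic easy to exercise without a network; in practice it would default to requests.get as shown.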

Another potential solution that I haven't investigated the LOE (level of effort) on is importing the duplicates index file and checking whether the listing is inside it - that avoids the extra HTTP hop of the HEAD request or try/except.

Together org

you're right, thanks for catching this! I missed that the download_manager doesn't handle those errors (which I guess makes sense) -- I reverted the dedupe flag for now since this needs more work. I think your final suggestion is probably the cleanest way so that we minimize the number of requests. I'd also use an index that contains all missing listings to keep it small. What do you think?

I think the final solution is the "best" in terms of trade-offs: it uses the deduped files as a source of truth rather than guessing and checking, while mitigating extraneous requests. Keeping an index is a great idea since this data is static and won't change over time. If I had to guess, fewer files are missing than present, so an index of missing listings would be the smaller one.

I'd imagine the requests.head solution is effectively a subset of trying to download the files and letting them error out, so it could serve as a stop-gap, but it adds an extra request per file, which we'd want to mitigate.

Thanks for the prompt response on this!
