Missing duplicates parquet files

#18
by bebensee - opened

Thanks for sharing and making this huge dataset available for research!

Most of the downloads have been working smoothly, but for several of the duplicates URLs I'm seeing a 404 on my end. For example:

--2023-11-29 07:07:27--  https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/1568/it_head.duplicates.parquet
Connecting to 75.17.107.42:8080... connected.
Proxy request sent, awaiting response... 404 Not Found
2023-11-29 07:07:27 ERROR 404: Not Found.

The following is a list of all URLs that have failed for me out of the 2022-33, 2022-40, 2022-49, 2023-06, 2023-14 snapshots:

https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/1568/it_head.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/2177/de_head.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/2277/it_head.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/1317/en_head.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/4915/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/4405/fr_head.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/1612/fr_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/1042/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/2310/fr_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/4499/it_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/2861/de_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/2270/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/4583/es_head.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/1785/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/2010/fr_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/2752/es_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-14/0124/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/0560/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/0577/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/3995/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/1959/de_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/0171/it_head.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/3378/en_middle.duplicates.parquet

I think these files might also be corrupt:

https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/0708/en_middle.duplicates.parquet
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/2750/en_middle.duplicates.parquet

When trying to load these with the datasets library, e.g. load_dataset("parquet", data_files=[fn1, fn2, ...]), I'm getting the following error, which persists even after re-downloading the files:

pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
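
For reference, the failing call looks roughly like this (a minimal sketch; the file paths are placeholders standing in for the locally downloaded duplicates shards):

from datasets import load_dataset

# placeholder paths for the downloaded duplicates shards
files = [
    "2023-06/0708/en_middle.duplicates.parquet",
    "2023-06/2750/en_middle.duplicates.parquet",
]
ds = load_dataset("parquet", data_files=files, split="train")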
Together org

Hi @bebensee thanks for pointing this out!

It is possible that for some shards there is no associated duplicates file; this happens if every document in that shard is either unique or is the first occurrence of its duplicate cluster.

Regarding the corrupted files -- I can load both of them using polars or pandas. For example, running

import io
import requests
import polars as pl

urls = [
    "https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/0708/en_middle.duplicates.parquet",
    "https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/2750/en_middle.duplicates.parquet"
]

for url in urls:
    # download the raw bytes and parse them as parquet directly from memory
    response = requests.get(url)
    df = pl.read_parquet(io.BytesIO(response.content))
    print(url, len(df))

returns

https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/0708/en_middle.duplicates.parquet 3796
https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/2750/en_middle.duplicates.parquet 3857
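
For completeness, the same check with pandas (a sketch; pandas needs pyarrow or fastparquet installed to parse parquet):

import io
import requests
import pandas as pd

urls = [
    "https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/0708/en_middle.duplicates.parquet",
    "https://data.together.xyz/redpajama-data-v2/v1.0.0/duplicates/2023-06/2750/en_middle.duplicates.parquet",
]

for url in urls:
    # fetch the raw bytes and parse them in memory, exactly as in the polars example
    df = pd.read_parquet(io.BytesIO(requests.get(url).content))
    print(url, len(df))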

So I think the files are valid parquet files, and the error is likely caused by either (1) a corrupted file download (so the magic bytes are missing from the local copy), or (2) an issue with the HF data loader.
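
One quick way to tell these two cases apart (a minimal sketch): a well-formed parquet file both begins and ends with the 4-byte magic PAR1, so checking the footer of the locally downloaded copy distinguishes a truncated download from a loader problem.

import os

def looks_like_parquet(path: str) -> bool:
    # a valid parquet file starts and ends with the magic bytes b"PAR1"
    with open(path, "rb") as f:
        head = f.read(4)
        f.seek(-4, os.SEEK_END)
        tail = f.read(4)
    return head == b"PAR1" and tail == b"PAR1"

# hypothetical local path, for illustration only
print(looks_like_parquet("en_middle.duplicates.parquet"))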

Hey @mauriceweber, to follow up on the topic of shards that are missing duplicates files: it makes for a poor user experience on Hugging Face when loading the dataset crashes because it tries to download a file that doesn't exist. I looked through the download code inside download_manager.py, and it doesn't look like there's any room for exception handling, or any parameter to continue with the remaining files when downloading a list of URLs. I also didn't see anything in download_config that could be passed to the HF loader to achieve this.

Looking at the duplicates code that was added earlier this week, I'd like to see whether it's possible to add a check that verifies each duplicates URL exists before downloading it, such as the snippet below.

(inside _split_generators_sample in RedPajama-Data-V2.py)

dupe_url = os.path.join(
    _URL_BASE, f"duplicates/{lst_id}.duplicates.parquet"
)

# skip listings whose duplicates file does not exist on the server
dupe_response = requests.head(dupe_url, allow_redirects=True)
if dupe_response.status_code == 404:
    continue

duplicates_ids_urls[part].append(dupe_url)

Alternatively, the files could be downloaded individually rather than in bulk, with a try/except around each download (a rough sketch follows after the next paragraph). If I'm understanding how Hugging Face's download manager works correctly, downloading the RPv2 dataset would keep its parallelization, so moving from bulk downloads to single downloads shouldn't introduce extra latency.

That way the snapshot can continue to download without being hampered by a parquet file that doesn't exist.
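
A rough sketch of that per-file approach (the helper name is hypothetical, and the exact exception the download manager raises on a 404 may differ):

def download_optional(dl_manager, urls):
    # download duplicates files one by one and skip any that are missing,
    # instead of letting a single 404 abort the whole bulk download
    local_paths = {}
    for url in urls:
        try:
            local_paths[url] = dl_manager.download(url)
        except Exception:
            # shard has no duplicates file (or the request failed); skip it
            continue
    return local_paths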

Another potential solution, whose level of effort I haven't investigated, is importing the duplicates index file and checking whether the listing appears in it (sketched below); that avoids the extra HTTP round trip of a HEAD request or a try/except download.
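
A sketch of that index-based check, reusing the variable names from the snippet above and assuming a plain-text index (hypothetical name duplicates_index.txt) that lists every shard id with a duplicates file; an inverse index of missing shards would work the same way with the membership test flipped:

# build a lookup of shard ids that have a duplicates file and only
# queue duplicates URLs for those shards
with open("duplicates_index.txt") as f:
    shards_with_duplicates = set(line.strip() for line in f)

if lst_id in shards_with_duplicates:
    duplicates_ids_urls[part].append(
        os.path.join(_URL_BASE, f"duplicates/{lst_id}.duplicates.parquet")
    )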

Together org

You're right, thanks for catching this! I missed that the download_manager doesn't handle those errors (which I guess makes sense) -- I reverted the dedupe flag for now since this needs more work. I think your final suggestion is probably the cleanest way to minimize the number of requests. I'd also use an index that contains only the missing listings, to keep it small. What do you think?

I think the final suggestion is the best in terms of trade-offs: it uses the dedupe files as a source of truth rather than guessing and checking, while avoiding extraneous requests. Keeping an index also makes sense, since this data is static and won't change over time. If I had to guess, fewer files are missing than present, so an index of missing listings would be the smaller one.

I'd imagine the requests.head approach ends up equivalent to attempting the downloads and letting the failures error out, so it could serve as a stop-gap, but it adds an extra request per file, which we'd want to avoid.

Thanks for the prompt response on this!
