Add full sequences (beyond the first 64 tokens)

#1 by pietrolesci - opened

Hi @usvsnsp, thanks for sharing this dataset!

I am trying to randomly sample sequences from the tokenized Pile corpus without downloading the entire dataset first. This dataset seems to be exactly what I need, but it only contains the first 64 tokens of each sequence (which, as @stellaathena mentioned on the Discord channel, were used for a memorisation project). It would be great to be able to retrieve the full sequences (2049 tokens).

Normally, I would do this with DuckDB + Hugging Face or by streaming the dataset. However, it appears that this repository's format is not compatible with either option. Any suggestions on how to do this without downloading and converting EleutherAI/pile-standard-pythia-preshuffled?
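For reference, this is roughly the access pattern I have in mind (just a sketch; both options fail on this repo because it ships raw .bin shards rather than parquet files or a supported loader):

```python
import duckdb
from datasets import load_dataset

# Option 1: stream records without downloading the whole repo first
ds = load_dataset("EleutherAI/pile-standard-pythia-preshuffled", streaming=True)

# Option 2: sample directly from remote parquet files with DuckDB
sample = duckdb.sql(
    "SELECT * "
    "FROM 'hf://datasets/EleutherAI/pile-standard-pythia-preshuffled/**/*.parquet' "
    "USING SAMPLE 100 ROWS"
).df()
```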

Thanks a lot in advance for your consideration!

EleutherAI org

Hey @pietrolesci
Thank you for showing interest in this dataset!

If you would like to retrieve full sequences, I would recommend following the batch viewer instructions, and using a single split of the bin file instead of the full set if you are constrained on memory / storage space.
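For concreteness, reading full sequences with the batch viewer utilities looks roughly like this (a sketch based on the MMapIndexedDataset reader in the Pythia repo; the path is a placeholder, and the matching .idx file has to sit next to the .bin):

```python
from utils.mmap_dataset import MMapIndexedDataset  # from github.com/EleutherAI/pythia

# Path prefix of the document file (the reader appends .bin / .idx itself)
dataset = MMapIndexedDataset("path/to/document", skip_warmup=True)

# Each entry is one full, preshuffled 2049-token training sequence,
# so index i is the i-th sequence seen during training.
seq = dataset[42]         # numpy array of token ids, length 2049
step0 = dataset[0:1024]   # the batch size was 1024 sequences, so this is step 0
```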

I go by the handle "sai_prashanth" (Orz) on Discord. Do let me know if you need further help.

Looking forward to seeing the research done on these sequences!

pietrolesci changed discussion status to closed

Hi @usvsnsp,

Thanks a lot for your reply!

I saw the batch viewer section yesterday, but it seems that I would still need to download the full EleutherAI/pile-standard-pythia-preshuffled repo (which is a lot of data: 20+ files of ~30GB each). As discussed on Discord (where I am happy to move this conversation if preferred), I was thinking that creating a dataset on the HF Hub would allow resource-constrained users to explore the data via DuckDB + HF. A dataset with the following schema (backed by parquet files) could be the central place to interact with the Pile and would enable users to "stream" it directly from the Hub.

config_name: raw
features:
    uid: (int) Unique identifier for each document.
    text: (str) The document text.
    meta: (dict) Metadata of the data instance.
    is_duplicated: (bool) Flag identifying duplicated documents, which avoids needing a separate "dedup" copy.

config_name: pythia
features:
    seq_id: (int) Unique sequence identifier reflecting the order in which sequences are seen during training.
    tokens: (list) The pre-tokenized data.
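To make the pythia config concrete, here is a rough sketch of how one shard could be materialised as parquet with pyarrow (untested; the random token matrix stands in for whatever the batch viewer reads out of a .bin split):

```python
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# Stand-in for one shard of sequences: (num_sequences, 2049) token ids
tokens = np.random.randint(0, 50_000, size=(1_000, 2049), dtype=np.uint16)
seq_ids = np.arange(1_000, dtype=np.int64)

table = pa.table({
    "seq_id": pa.array(seq_ids),
    "tokens": pa.array(tokens.tolist(), type=pa.list_(pa.uint16())),
})
pq.write_table(table, "pythia-00000.parquet")
```

With files like these uploaded to a Hub dataset repo, both DuckDB queries over hf:// paths and load_dataset(..., streaming=True) would work out of the box.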

I tried looking into creating this myself, but it is resource-intensive and I am starting to think it is not feasible on my end. Curious to hear what you think about this and how difficult it would be on EleutherAI's end.

Thanks a lot for your consideration!

P.S.: is the code to create EleutherAI/pile-duped-pythia-random-sampled available somewhere? It might be useful to try to re-implement this myself :)

pietrolesci changed discussion status to open
EleutherAI org
This comment has been hidden
