Sequence "packing" logic

#2
by pietrolesci - opened

Hi there,

I am trying to understand how the tokenized Pile dataset has been "packed" into sequences of fixed length (2049 tokens). I started wondering about this when I noticed (from a small sample) that the sequences do not appear to be concatenated with the <EOS> token, which is something I thought was done by default.
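For reference, here is a minimal sketch of the packing logic I had assumed (concatenate tokenized documents with an <EOS> separator, then slice the stream into 2049-token blocks). The EOS id and the handling of the final partial block are placeholders on my part, not necessarily what was done for this dataset:

```python
# Minimal sketch of the packing scheme I assumed (not necessarily what was
# actually used here): concatenate tokenized documents, separated by an
# <EOS> token, then split the resulting stream into fixed-length blocks.

from itertools import chain

SEQ_LEN = 2049          # fixed sequence length used in the released dataset
EOS_TOKEN_ID = 0        # placeholder: the real EOS id depends on the tokenizer


def pack_documents(tokenized_docs, seq_len=SEQ_LEN, eos_id=EOS_TOKEN_ID):
    """Concatenate documents with an EOS separator and chunk into blocks.

    `tokenized_docs` is an iterable of lists of token ids.
    The trailing partial block (if any) is dropped here for simplicity.
    """
    # Append EOS after every document, then flatten into one long token stream.
    stream = list(chain.from_iterable(doc + [eos_id] for doc in tokenized_docs))

    # Slice the stream into non-overlapping fixed-length sequences.
    n_blocks = len(stream) // seq_len
    return [stream[i * seq_len:(i + 1) * seq_len] for i in range(n_blocks)]


if __name__ == "__main__":
    docs = [[10, 11, 12], [20, 21], [30, 31, 32, 33]]
    print(pack_documents(docs, seq_len=4))
    # [[10, 11, 12, 0], [20, 21, 0, 30], [31, 32, 33, 0]]
```

Is this roughly the scheme that was used, or was the EOS separator dropped (or handled differently) when building the 2049-token sequences?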

Thanks a lot for your help!

pietrolesci changed discussion title from Data preparation implementation to Sequence "packing" logic