Positional embedding contains the incorrect padding index `1`

#3 opened by Phando

I suspect this results from the vocabulary conversion: the original tokenizer set the padding index to 1, while this repo's GPT2Tokenizer uses a padding index of 50258.
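
A quick way to check for this mismatch might look like the sketch below. The repo id is a placeholder (the actual checkpoint isn't named here), and whether the positional embedding exposes a `padding_idx` at all depends on the model class, so treat this as a rough diagnostic rather than a confirmed fix:

```python
import torch.nn as nn
from transformers import GPT2Tokenizer, AutoModel

# Placeholder repo id; replace with the checkpoint this discussion refers to.
repo_id = "org/model"

tokenizer = GPT2Tokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

print("tokenizer pad_token_id:", tokenizer.pad_token_id)  # e.g. 50258 after conversion

# Walk the module tree and report every embedding whose padding_idx
# disagrees with the tokenizer's pad token id.
for name, module in model.named_modules():
    if isinstance(module, nn.Embedding) and module.padding_idx is not None:
        match = module.padding_idx == tokenizer.pad_token_id
        print(f"{name}: padding_idx={module.padding_idx} (matches tokenizer: {match})")
```

If an embedding still reports `padding_idx=1` while the tokenizer reports 50258, that would point to the conversion not updating the padding index.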
