Fast Tokenizer Required

by ShortText

Is there a fast tokenizer for Japanese BERT?
For large datasets, tokenization fails with a memory error.
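
For context, a minimal sketch of a common workaround, assuming the model is `cl-tohoku/bert-base-japanese` (the thread does not name it) and that the corpus can be loaded with the `datasets` library. `BertJapaneseTokenizer` depends on MeCab for word segmentation and has no Rust-backed fast variant, so instead of tokenizing everything at once, one can tokenize in batches with `Dataset.map`, which writes each batch to an on-disk Arrow cache and keeps memory usage bounded:

```python
# Sketch only: model name and "corpus.txt" are placeholder assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

# Loads the slow (Python) BertJapaneseTokenizer; no fast variant exists.
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")

# "corpus.txt" stands in for the large dataset in question.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    # Truncate to the model's maximum length; padding can be deferred
    # to a data collator so the cached output stays small.
    return tokenizer(batch["text"], truncation=True, max_length=512)

# batched=True processes batch_size examples at a time and caches each
# processed batch to disk, so the whole tokenized corpus never has to
# fit in memory at once.
tokenized = dataset.map(tokenize, batched=True, batch_size=1000)
```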
