Fast, state-of-the-art tokenizers, optimized for both research and production.
- Train new vocabularies and tokenize, using today’s most used tokenizers (a short training sketch follows this list).
- Extremely fast (both training and tokenization), thanks to the Rust implementation: it takes less than 20 seconds to tokenize a GB of text on a server’s CPU.
- Easy to use, but also extremely versatile.
- Designed for both research and production.
- Full alignment tracking. Even with destructive normalization, it’s always possible to get the part of the original sentence that corresponds to any token (see the encoding sketch after this list).
- Does all the pre-processing: truncation, padding, and adding the special tokens your model needs.
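A minimal sketch of training a new vocabulary and tokenizing with the Python bindings (assuming `pip install tokenizers`); the training file name `wiki.train.raw` is only a placeholder for a plain-text corpus:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build a tokenizer around an (initially empty) BPE model
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Train a new vocabulary from raw text files (file name is a placeholder)
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw"], trainer=trainer)

# Tokenize a sentence
output = tokenizer.encode("Hello, y'all! How are you?")
print(output.tokens)
```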
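Alignment tracking and the built-in pre-processing are exposed on the same objects. A sketch, assuming the `tokenizer` trained above (with a `[PAD]` token in its vocabulary):

```python
# Every token carries offsets back into the original string, even after normalization
sentence = "Hello, y'all! How are you?"
output = tokenizer.encode(sentence)
start, end = output.offsets[0]
print(output.tokens[0], "->", sentence[start:end])

# Configure truncation and padding once; they are applied on every encode call
pad_id = tokenizer.token_to_id("[PAD]")
tokenizer.enable_truncation(max_length=512)
tokenizer.enable_padding(pad_token="[PAD]", pad_id=pad_id, length=512)
```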
- The tokenization pipeline