Tokenizers

Fast, state-of-the-art tokenizers, optimized for both research and production

🤗 Tokenizers provides an implementation of today’s most used tokenizers, with a focus on performance and versatility. These tokenizers are also used in 🤗 Transformers.
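
For example, a minimal sketch of loading a pretrained tokenizer and encoding a sentence, assuming 🤗 Tokenizers is installed and the `bert-base-uncased` files are reachable on the Hugging Face Hub:

```python
from tokenizers import Tokenizer

# Load a pretrained tokenizer from the Hugging Face Hub.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

encoding = tokenizer.encode("Hello, y'all! How are you?")
print(encoding.tokens)
# ['[CLS]', 'hello', ',', 'y', "'", 'all', '!', 'how', 'are', 'you', '?', '[SEP]']
```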

Main features:

  • Train new vocabularies and tokenize, using today’s most used tokenizers.
  • Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server’s CPU.
  • Easy to use, but also extremely versatile.
  • Designed for both research and production.
  • Full alignment tracking. Even with destructive normalization, it’s always possible to get the part of the original sentence that corresponds to any token.
  • Does all the pre-processing: truncation, padding, and adding the special tokens your model needs (see the sketch after this list).
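
As a rough sketch of several of these features together, here is how one might train a small BPE vocabulary, enable truncation and padding, and read back token-to-text alignment through character offsets. The training file `wiki.train.raw` is a placeholder path; the calls themselves (`Tokenizer`, `BPE`, `BpeTrainer`, `Whitespace`, `enable_truncation`, `enable_padding`) come from the library's documented API.

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build an untrained BPE tokenizer with whitespace pre-tokenization.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Train a new vocabulary; "wiki.train.raw" is a placeholder path.
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw"], trainer=trainer)

# Pre-processing: truncate long inputs and pad short ones.
tokenizer.enable_truncation(max_length=128)
tokenizer.enable_padding(pad_token="[PAD]", pad_id=tokenizer.token_to_id("[PAD]"))

# Alignment tracking: each token carries the (start, end) character
# span of the original, un-normalized text it came from.
text = "Hello, y'all! How are you?"
encoding = tokenizer.encode(text)
for token, (start, end) in zip(encoding.tokens, encoding.offsets):
    print(token, "->", repr(text[start:end]))
```

The offsets survive normalization, so even after lowercasing or accent-stripping, slicing the original string with a token's span returns the exact text that produced it.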