Training from memory

In the Quicktour, we saw how to build and train a tokenizer using text files, but we can actually use any Python Iterator. In this section we’ll see a few different ways of training our tokenizer.

For all the examples listed below, we’ll use the same Tokenizer and Trainer, built as follows:

from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers
tokenizer = Tokenizer(models.Unigram())
tokenizer.normalizer = normalizers.NFKC()
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
tokenizer.decoder = decoders.ByteLevel()
trainer = trainers.UnigramTrainer(
    vocab_size=20000,
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
    special_tokens=["<PAD>", "<BOS>", "<EOS>"],
)

This tokenizer is based on the Unigram model. It takes care of normalizing the input using the NFKC Unicode normalization method, and uses a ByteLevel pre-tokenizer with the corresponding decoder.

For more information on the components used here, you can check the components page of the documentation.
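As a quick sanity check (this snippet is just an illustration, not part of the training flow), you can see what the normalizer and pre-tokenizer do to a sample string:

# NFKC folds compatibility characters, e.g. the "ﬁ" ligature becomes "fi"
print(tokenizer.normalizer.normalize_str("ﬁne"))
# ByteLevel splits on whitespace and maps bytes to printable characters (spaces become "Ġ")
print(tokenizer.pre_tokenizer.pre_tokenize_str("Hello, how are you?"))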

The most basic way

As you probably guessed already, the easiest way to train our tokenizer is by using a List:

# First few lines of the "Zen of Python" https://www.python.org/dev/peps/pep-0020/
data = [
    "Beautiful is better than ugly."
    "Explicit is better than implicit."
    "Simple is better than complex."
    "Complex is better than complicated."
    "Flat is better than nested."
    "Sparse is better than dense."
    "Readability counts."
]
tokenizer.train_from_iterator(data, trainer=trainer)

Easy, right? You can use anything that works as an iterator here, be it a List, a Tuple, or a np.ndarray. Anything works as long as it provides strings.
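For instance (purely as an illustration, reusing the data list from above), a generator expression works just as well:

# The same call works with a generator expression instead of a list
tokenizer.train_from_iterator((line.lower() for line in data), trainer=trainer)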

Using the 🤗 Datasets library

A great way to access one of the many datasets that exist out there is to use the 🤗 Datasets library. For more information about it, check out its official documentation.

Let’s start by loading our dataset:

import datasets
dataset = datasets.load_dataset("wikitext", "wikitext-103-raw-v1", split="train+test+validation")

The next step is to build an iterator over this dataset. The easiest way to do this is probably by using a generator:

def batch_iterator(batch_size=1000):
    # Only keep the text column to avoid decoding the rest of the columns unnecessarily
    tok_dataset = dataset.select_columns("text")
    for batch in tok_dataset.iter(batch_size):
        yield batch["text"]

As you can see here, for improved efficiency we can actually provide batches of examples to train on, instead of iterating over them one by one. By doing so, we can expect performance very similar to what we get when training directly from files.

With our iterator ready, we just need to launch the training. So that the progress bar can display the total number of examples, we can specify the length of the dataset:

tokenizer.train_from_iterator(batch_iterator(), trainer=trainer, length=len(dataset))

And that’s it!
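To verify that training worked, you can encode a sentence with the freshly trained tokenizer (the sentence below is just an example):

# Encode a sample sentence with the trained tokenizer
output = tokenizer.encode("The quick brown fox jumps over the lazy dog.")
print(output.tokens)
print(output.ids)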

Using gzip files

Since a gzip file opened in text mode can be iterated over line by line in Python, it is extremely simple to train on such a file:

import gzip
with gzip.open("data/my-file.0.gz", "rt") as f:
    tokenizer.train_from_iterator(f, trainer=trainer)

Now if we wanted to train from multiple gzip files, it wouldn’t be much harder:

files = ["data/my-file.0.gz", "data/my-file.1.gz", "data/my-file.2.gz"]
def gzip_iterator():
    for path in files:
        with gzip.open(path, "rt") as f:
            for line in f:
                yield line
tokenizer.train_from_iterator(gzip_iterator(), trainer=trainer)

And voilà!
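Whichever source you trained from, you will usually want to save the result to disk so it can be reloaded later; the file name below is just an example:

# Serialize the trained tokenizer to a single JSON file...
tokenizer.save("tokenizer.json")
# ...and load it back when needed
tokenizer = Tokenizer.from_file("tokenizer.json")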
