Using tokenizers from 🤗 Tokenizers
The PreTrainedTokenizerFast class depends on the 🤗 Tokenizers library. Tokenizers obtained from the 🤗 Tokenizers library can be loaded very simply into 🤗 Transformers.
Before getting into the specifics, let's start by creating a dummy tokenizer in a few lines:
>>> from tokenizers import Tokenizer
>>> from tokenizers.models import BPE
>>> from tokenizers.trainers import BpeTrainer
>>> from tokenizers.pre_tokenizers import Whitespace
>>> tokenizer = Tokenizer(BPE(unk_token="[UNK]"))  # BPE model with an unknown-token fallback
>>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
>>> tokenizer.pre_tokenizer = Whitespace()  # split on whitespace before applying BPE
>>> files = [...]  # paths to the training text files
>>> tokenizer.train(files, trainer)
We now have a tokenizer trained on the files we defined. We can either continue using it in the current runtime, or save it to a JSON file for future re-use.
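As a quick sanity check (the sample sentence here is just an illustrative assumption), the encode method from 🤗 Tokenizers returns an Encoding whose tokens attribute lists the produced tokens:

>>> encoding = tokenizer.encode("Hello, y'all! How are you?")
>>> print(encoding.tokens)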
Loading directly from the tokenizer object
Let's see how to leverage this tokenizer object in the 🤗 Transformers library. The PreTrainedTokenizerFast class allows easy instantiation by accepting the instantiated tokenizer object as an argument:
>>> from transformers import PreTrainedTokenizerFast
>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to the tokenizer page for more information.
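For instance, you can call it directly on text, just like any other 🤗 Transformers tokenizer. A minimal sketch, assuming the tokenizer trained above (the sentence is only illustrative):

>>> encoded = fast_tokenizer("Hello, y'all! How are you?")
>>> encoded["input_ids"]  # token ids, ready to feed to a model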
Loading from a JSON file
In order to load a tokenizer from a JSON file, let's first save our tokenizer:
>>> tokenizer.save("tokenizer.json")
The path to which we saved this file can be passed to the PreTrainedTokenizerFast initialization method using the tokenizer_file parameter:
>>> from transformers import PreTrainedTokenizerFast
>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to the tokenizer page for more information.
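As a further sketch, the tokenizer can also be saved in the standard 🤗 Transformers format with save_pretrained and loaded back with from_pretrained (the directory name below is a hypothetical example):

>>> fast_tokenizer.save_pretrained("my-tokenizer")  # "my-tokenizer" is a hypothetical directory
>>> reloaded_tokenizer = PreTrainedTokenizerFast.from_pretrained("my-tokenizer")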