Tokenizer

The base class PreTrainedTokenizer implements the common methods for loading/saving a tokenizer either from a local file or directory, or from a pretrained tokenizer provided by the library (downloaded from HuggingFace’s AWS S3 repository).

PreTrainedTokenizer is the main entry point into tokenizers as it also implements the main methods for using all the tokenizers:

  • tokenizing, converting tokens to ids and back and encoding/decoding,

  • adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…),

  • managing special tokens (adding them, assigning them to roles, making sure they are not split during tokenization)
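
For example, these three roles map to everyday calls like the following (a minimal sketch, assuming a cached 'bert-base-uncased' vocabulary):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# tokenizing, converting tokens to ids and back, encoding/decoding
tokens = tokenizer.tokenize("Hello world!")
ids = tokenizer.convert_tokens_to_ids(tokens)
text = tokenizer.decode(tokenizer.encode("Hello world!", add_special_tokens=True))

# adding new tokens to the vocabulary
tokenizer.add_tokens(['new_tok1'])

# managing special tokens (never split, exposed as class attributes)
print(tokenizer.cls_token, tokenizer.sep_token)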

PreTrainedTokenizer

class transformers.PreTrainedTokenizer(max_len=None, **kwargs)[source]

Base class for all tokenizers. Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and adding tokens to the vocabulary.

This class also contains the added tokens in a unified way on top of all tokenizers, so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).

Class attributes (overridden by derived classes):

  • vocab_files_names: a python dict with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).

  • pretrained_vocab_files_map: a python dict of dict the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level being the short-cut-names (string) of the pretrained models with, as associated values, the url (string) to the associated pretrained vocabulary file.

  • max_model_input_sizes: a python dict with, as keys, the short-cut-names (string) of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.

  • pretrained_init_configuration: a python dict with, as keys, the short-cut-names (string) of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.

Parameters
  • bos_token – (Optional) string: a beginning of sentence token. Will be associated to self.bos_token and self.bos_token_id

  • eos_token – (Optional) string: an end of sentence token. Will be associated to self.eos_token and self.eos_token_id

  • unk_token – (Optional) string: an unknown token. Will be associated to self.unk_token and self.unk_token_id

  • sep_token – (Optional) string: a separation token (e.g. to separate context and query in an input sequence). Will be associated to self.sep_token and self.sep_token_id

  • pad_token – (Optional) string: a padding token. Will be associated to self.pad_token and self.pad_token_id

  • cls_token – (Optional) string: a classification token (e.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model). Will be associated to self.cls_token and self.cls_token_id

  • mask_token – (Optional) string: a masking token (e.g. when training a model with masked-language modeling). Will be associated to self.mask_token and self.mask_token_id

  • additional_special_tokens – (Optional) list: a list of additional special tokens. Adding all special tokens here ensures they won’t be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids

add_special_tokens(special_tokens_dict)[source]

Add a dictionary of special tokens (eos, pad, cls…) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).

Using add_special_tokens will ensure your special tokens can be used in several ways:

  • special tokens are carefully handled by the tokenizer (they are never split)

  • you can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.

When possible, special tokens are already registered for provided pretrained models (e.g. the cls_token of BertTokenizer is already registered as '[CLS]', and XLM's is registered as '</s>').

Parameters

special_tokens_dict

dict of string. Keys should be in the list of predefined special attributes: [bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens].

Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the unk_token to them).

Returns

Number of tokens added to the vocabulary.

Examples:

# Let's see how to add a new classification token to GPT-2
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')

special_tokens_dict = {'cls_token': '<CLS>'}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer))  # Notice: resize_token_embeddings expects the full size of the new vocabulary, i.e. the length of the tokenizer.

assert tokenizer.cls_token == '<CLS>'
add_tokens(new_tokens)[source]

Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from length of the current vocabulary.

Parameters

new_tokens – list of string. Each string is a token to add. Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the unk_token to them).

Returns

Number of tokens added to the vocabulary.

Examples:

# Let's see how to increase the vocabulary of Bert model and tokenizer
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer))  # Notice: resize_token_embeddings expects the full size of the new vocabulary, i.e. the length of the tokenizer.
property additional_special_tokens

All the additional special tokens you may want to use (list of strings). Log an error if used while not having been set.

property additional_special_tokens_ids

Ids of all the additional special tokens in the vocabulary (list of integers). Log an error if used while not having been set.

property all_special_ids

List the vocabulary indices of the special tokens (‘<unk>’, ‘<cls>’…) mapped to class attributes (cls_token, unk_token…).

property all_special_tokens

List all the special tokens (‘<unk>’, ‘<cls>’…) mapped to class attributes (cls_token, unk_token…).

property bos_token

Beginning of sentence token (string). Log an error if used while not having been set.

property bos_token_id

Id of the beginning of sentence token in the vocabulary. Log an error if used while not having been set.

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A RoBERTa sequence has the following format:

  • single sequence: <s> X </s>

  • pair of sequences: <s> A </s></s> B </s>
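
Derived tokenizer classes override this method with their own format. A minimal sketch using BertTokenizer (which produces [CLS] A [SEP] B [SEP]):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Fine, thanks."))

# Concatenate the two sequences and add the model-specific special tokens
input_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.convert_ids_to_tokens(input_ids))  # ['[CLS]', 'how', ..., '[SEP]', 'fine', ..., '[SEP]']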

static clean_up_tokenization(out_string)[source]

Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.

property cls_token

Classification token (string). E.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set.

property cls_token_id

Id of the classification token in the vocabulary. E.g. to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set.

convert_ids_to_tokens(ids, skip_special_tokens=False)[source]

Converts a single index or a sequence of indices (integers) to a token (resp. a sequence of tokens) (str/unicode), using the vocabulary and added tokens.

Parameters

skip_special_tokens – Don’t decode special tokens (self.all_special_tokens). Default: False

convert_tokens_to_ids(tokens)[source]

Converts a single token, or a sequence of tokens, (str/unicode) to a single integer id (resp. a sequence of ids), using the vocabulary.

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (string) to a single string. The simplest way to do this is ' '.join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.
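
A minimal sketch of round-tripping between the three representations (assuming a BERT tokenizer):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

tokens = tokenizer.tokenize("Tokenizers are great")   # string -> tokens
ids = tokenizer.convert_tokens_to_ids(tokens)         # tokens -> ids
back = tokenizer.convert_ids_to_tokens(ids)           # ids -> tokens
text = tokenizer.convert_tokens_to_string(back)       # tokens -> string (removes '##' sub-word artifacts)
print(tokens, ids, text)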

decode(token_ids, skip_special_tokens=False, clean_up_tokenization_spaces=True)[source]

Converts a sequence of ids (integers) to a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces. Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

Parameters
  • token_ids – list of tokenized input ids. Can be obtained using the encode or encode_plus methods.

  • skip_special_tokens – if set to True, will remove special tokens from the decoded string.

  • clean_up_tokenization_spaces – if set to True, will clean up the tokenization spaces.
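
For example (a minimal sketch assuming a BERT tokenizer):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode("Hello, world!", add_special_tokens=True)

print(tokenizer.decode(ids))                            # includes [CLS]/[SEP]
print(tokenizer.decode(ids, skip_special_tokens=True))  # special tokens removed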

encode(text, text_pair=None, add_special_tokens=True, max_length=None, stride=0, truncation_strategy='longest_first', return_tensors=None, **kwargs)[source]

Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.

Same as doing self.convert_tokens_to_ids(self.tokenize(text)).

Parameters
  • text – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method)

  • text_pair – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method)

  • add_special_tokens – if set to True, the sequences will be encoded with the special tokens relative to their model.

  • max_length – if set to a number, will limit the total sequence returned so that it has a maximum length. Overflowing tokens are not returned by encode (use encode_plus to retrieve them).

  • stride – if set to a number along with max_length, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.

  • truncation_strategy – string selected in the following options:

    • 'longest_first' (default): iteratively reduce the inputs sequence until the input is under max_length, removing a token from the longest sequence at each step (when there is a pair of input sequences)

    • 'only_first': only truncate the first sequence

    • 'only_second': only truncate the second sequence

    • 'do_not_truncate': do not truncate (raise an error if the input sequence is longer than max_length)

  • return_tensors – (optional) can be set to ‘tf’ or ‘pt’ to return respectively TensorFlow tf.constant or PyTorch torch.Tensor instead of a list of python integers.

  • **kwargs – passed to the self.tokenize() method
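
A minimal sketch of single-sequence and pair encoding (assuming a BERT tokenizer; PyTorch is only needed for the return_tensors='pt' line):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

single = tokenizer.encode("How old are you?", add_special_tokens=True)
pair = tokenizer.encode("How old are you?", "I'm 6 years old", add_special_tokens=True)

# Truncate long inputs to at most 16 ids (special tokens included)
truncated = tokenizer.encode("a very long piece of text " * 20, max_length=16)

# Return a PyTorch tensor (shape [1, seq_len]) instead of a list of python ints
pt_ids = tokenizer.encode("How old are you?", return_tensors='pt')
print(single, pair, len(truncated), pt_ids.shape)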

encode_plus(text, text_pair=None, add_special_tokens=True, max_length=None, stride=0, truncation_strategy='longest_first', return_tensors=None, **kwargs)[source]

Returns a dictionary containing the encoded sequence or sequence pair and additional information: the mask for sequence classification and the overflowing elements if a max_length is specified.

Parameters
  • text – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method)

  • text_pair – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method)

  • add_special_tokens – if set to True, the sequences will be encoded with the special tokens relative to their model.

  • max_length – if set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those will be added to the returned dictionary

  • stride – if set to a number along with max_length, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.

  • truncation_strategy – string selected in the following options:

    • 'longest_first' (default): iteratively reduce the inputs sequence until the input is under max_length, removing a token from the longest sequence at each step (when there is a pair of input sequences)

    • 'only_first': only truncate the first sequence

    • 'only_second': only truncate the second sequence

    • 'do_not_truncate': do not truncate (raise an error if the input sequence is longer than max_length)

  • return_tensors – (optional) can be set to ‘tf’ or ‘pt’ to return respectively TensorFlow tf.constant or PyTorch torch.Tensor instead of a list of python integers.

  • **kwargs – passed to the self.tokenize() method
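
A minimal sketch (assuming a BERT tokenizer; the exact set of keys in the returned dictionary depends on the tokenizer and the arguments used):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
enc = tokenizer.encode_plus("How old are you?", "I'm 6 years old",
                            add_special_tokens=True, max_length=32)

print(sorted(enc.keys()))  # e.g. 'input_ids', 'token_type_ids', ... depending on the model
print(enc['input_ids'])    # the same ids tokenizer.encode(...) would return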

property eos_token

End of sentence token (string). Log an error if used while not having been set.

property eos_token_id

Id of the end of sentence token in the vocabulary. Log an error if used while not having been set.

classmethod from_pretrained(*inputs, **kwargs)[source]

Instantiate a PreTrainedTokenizer (or a derived class) from a predefined tokenizer.

Parameters
  • pretrained_model_name_or_path

    either:

    • a string with the shortcut name of a predefined tokenizer to load from cache or download, e.g.: bert-base-uncased.

    • a path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g.: ./my_model_directory/.

    • (not applicable to all derived classes) a path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (e.g. Bert, XLNet), e.g.: ./my_model_directory/vocab.txt.

  • cache_dir – (optional) string: Path to a directory in which the downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used.

  • force_download – (optional) boolean, default False: Force to (re-)download the vocabulary files and override the cached versions if they exist.

  • proxies – (optional) dict, default None: A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

  • inputs – (optional) positional arguments: will be passed to the Tokenizer __init__ method.

  • kwargs – (optional) keyword arguments: will be passed to the Tokenizer __init__ method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the doc string of PreTrainedTokenizer for details.

Examples:

# We can't directly instantiate the base class `PreTrainedTokenizer` so let's show our examples on a derived class: BertTokenizer
from transformers import BertTokenizer

# Download vocabulary from S3 and cache.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# If vocabulary files are in a directory (e.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`)
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/')

# If the tokenizer uses a single vocabulary file, you can point directly to this file
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/my_vocab.txt')

# You can link tokens to special vocabulary when instantiating
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', unk_token='<unk>')
# You should be sure '<unk>' is in the vocabulary when doing that.
# Otherwise use tokenizer.add_special_tokens({'unk_token': '<unk>'}) instead.
assert tokenizer.unk_token == '<unk>'
get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Retrieves a special-tokens mask for a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.

Parameters
  • token_ids_0 – list of ids (must not contain special tokens)

  • token_ids_1 – Optional list of ids (must not contain special tokens), necessary when fetching sequence ids for sequence pairs

  • already_has_special_tokens – (default False) Set to True if the token list is already formatted with special tokens for the model

Returns

1 for a special token, 0 for a sequence token.

Return type

A list of integers in the range [0, 1]
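
For example (a minimal sketch assuming a cached 'bert-base-uncased' vocabulary):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode("Hello world", add_special_tokens=True)
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)

# 1 marks special tokens ([CLS]/[SEP] for BERT), 0 marks regular sequence tokens
print(list(zip(tokenizer.convert_ids_to_tokens(ids), mask)))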

property mask_token

Mask token (string). E.g. when training a model with masked-language modeling. Log an error if used while not having been set.

property mask_token_id

Id of the mask token in the vocabulary. E.g. when training a model with masked-language modeling. Log an error if used while not having been set.

num_added_tokens(pair=False)[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Note

This encodes inputs and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

Parameters

pair – Returns the number of added tokens in the case of a sequence pair if set to True, returns the number of added tokens in the case of a single sequence if set to False.

Returns

Number of tokens added to sequences
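
For example (a minimal sketch assuming a BERT tokenizer), the count corresponds to the [CLS]/[SEP] tokens added around the sequence(s):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.num_added_tokens())           # 2 for a single sequence: [CLS] X [SEP]
print(tokenizer.num_added_tokens(pair=True))  # 3 for a pair: [CLS] A [SEP] B [SEP]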

property pad_token

Padding token (string). Log an error if used while not having been set.

property pad_token_id

Id of the padding token in the vocabulary. Log an error if used while not having been set.

prepare_for_model(ids, pair_ids=None, max_length=None, add_special_tokens=True, stride=0, truncation_strategy='longest_first', return_tensors=None)[source]

Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens, and manages a window stride for overflowing tokens.

Parameters
  • ids – list of tokenized input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • pair_ids – Optional second list of input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • max_length – maximum length of the returned list. Will truncate by taking into account the special tokens.

  • add_special_tokens – if set to True, the sequences will be encoded with the special tokens relative to their model.

  • stride – window stride for overflowing tokens. Can be useful for edge effect removal when using sequential list of inputs.

  • truncation_strategy – string selected in the following options:

    • 'longest_first' (default): iteratively reduce the inputs sequence until the input is under max_length, removing a token from the longest sequence at each step (when there is a pair of input sequences)

    • 'only_first': only truncate the first sequence

    • 'only_second': only truncate the second sequence

    • 'do_not_truncate': do not truncate (raise an error if the input sequence is longer than max_length)

  • return_tensors – (optional) can be set to ‘tf’ or ‘pt’ to return respectively TensorFlow tf.constant or PyTorch torch.Tensor instead of a list of python integers.

Returns

A dictionary of shape:

{
    input_ids: list[int],
    overflowing_tokens: list[int] if a max_length is specified, else None,
    special_tokens_mask: list[int] if add_special_tokens is set to True
}
With the fields:

input_ids: list of tokens to be fed to a model

overflowing_tokens: list of overflowing tokens if a max length is specified.

special_tokens_mask: if adding special tokens, this is a list of [0, 1], with 1 specifying added special tokens and 0 specifying regular sequence tokens.
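
A minimal sketch (assuming a BERT tokenizer), showing truncation with a stride so that the overflowing tokens keep some overlap with the main sequence:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("a fairly long sentence " * 10))

outputs = tokenizer.prepare_for_model(ids, max_length=16, add_special_tokens=True, stride=4)
print(len(outputs['input_ids']))                   # at most 16, special tokens included
print(len(outputs.get('overflowing_tokens', [])))  # ids that did not fit, plus 4 ids of stride overlap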

save_pretrained(save_directory)[source]
Save the tokenizer vocabulary files together with:
  • added tokens,

  • special-tokens-to-class-attributes-mapping,

  • tokenizer instantiation positional and keywords inputs (e.g. do_lower_case for Bert).

This won’t save modifications you may have applied to the tokenizer after instantiation, other than the added tokens and the special token mapping (e.g. modifying tokenizer.do_lower_case after creation).

This method makes sure the full tokenizer can then be re-loaded using the from_pretrained() class method.
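
A minimal sketch of the save/reload round trip (the directory path is just an example; in this version of the library the target directory must already exist):

import os
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(['new_tok1'])

os.makedirs('./my_tokenizer/', exist_ok=True)
tokenizer.save_pretrained('./my_tokenizer/')   # writes vocabulary, added tokens and special tokens mapping

reloaded = BertTokenizer.from_pretrained('./my_tokenizer/')
assert len(reloaded) == len(tokenizer)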

save_vocabulary(save_directory)[source]

Save the tokenizer vocabulary to a directory. This method does NOT save added tokens and special token mappings.

Please use save_pretrained() to save the full tokenizer state if you want to reload it using the from_pretrained() class method.

property sep_token

Separation token (string). E.g. separate context and query in an input sequence. Log an error if used while not having been set.

property sep_token_id

Id of the separation token in the vocabulary. E.g. separate context and query in an input sequence. Log an error if used while not having been set.

property special_tokens_map

A dictionary mapping special token class attribute (cls_token, unk_token…) to their values (‘<unk>’, ‘<cls>’…)

tokenize(text, **kwargs)[source]

Converts a string to a sequence of tokens (strings), using the tokenizer. Splits into words for word-based vocabularies, or into sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece).

Takes care of added tokens.
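
For example (a minimal sketch assuming 'bert-base-uncased', whose WordPiece vocabulary marks sub-word continuations with '##'):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.tokenize("Tokenization is lossless"))
# sub-word vocabularies split rare words, e.g. something like ['token', '##ization', 'is', 'loss', '##less']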

truncate_sequences(ids, pair_ids=None, num_tokens_to_remove=0, truncation_strategy='longest_first', stride=0)[source]

Truncates a sequence pair in place to the maximum length.

truncation_strategy: string selected in the following options:

  • 'longest_first' (default): iteratively reduce the inputs sequence until the input is under max_length, removing a token from the longest sequence at each step (when there is a pair of input sequences). Overflowing tokens only contain overflow from the first sequence.

  • 'only_first': only truncate the first sequence. Raise an error if the first sequence is shorter than or equal to num_tokens_to_remove.

  • 'only_second': only truncate the second sequence

  • 'do_not_truncate': do not truncate (raise an error if the input sequence is longer than max_length)
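
A minimal sketch (assuming a BERT tokenizer; in this version of the library the method returns a tuple of the truncated ids, the truncated pair ids and the overflowing tokens):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("a rather long first sequence " * 4))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("a short second sequence"))

# Remove 10 ids, always taking them from the longest of the two sequences
ids_a, ids_b, overflowing = tokenizer.truncate_sequences(
    ids_a, pair_ids=ids_b, num_tokens_to_remove=10, truncation_strategy='longest_first')
print(len(ids_a), len(ids_b), len(overflowing))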

property unk_token

Unknown token (string). Log an error if used while not having been set.

property unk_token_id

Id of the unknown token in the vocabulary. Log an error if used while not having been set.

vocab_size()[source]

Size of the base vocabulary (without the added tokens)