Tokenizer

A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full Python implementation and a “Fast” implementation based on the Rust library tokenizers. The “Fast” implementations allow:

  1. a significant speed-up in particular when doing batched tokenization and

  2. additional methods to map between the original string (characters and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token). Currently no “Fast” implementation is available for the SentencePiece-based tokenizers (for T5, ALBERT, CamemBERT, XLM-RoBERTa and XLNet models).

The base classes PreTrainedTokenizer and PreTrainedTokenizerFast implement the common methods for encoding string inputs into model inputs (see below) and for instantiating/saving Python and “Fast” tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace’s AWS S3 repository). They both rely on PreTrainedTokenizerBase, which contains the common methods, and on SpecialTokensMixin.

PreTrainedTokenizer and PreTrainedTokenizerFast thus implement the main methods for using all the tokenizers:

  • Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers).

  • Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…).

  • Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the tokenizer for easy access and making sure they are not split during tokenization.

BatchEncoding holds the output of the tokenizer’s encoding methods (__call__, encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure Python tokenizer, this class behaves just like a standard Python dictionary and holds the various model inputs computed by these methods (input_ids, attention_mask…). When the tokenizer is a “Fast” tokenizer (i.e., backed by the HuggingFace tokenizers library), this class additionally provides several advanced alignment methods which can be used to map between the original string (characters and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token).
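For illustration, here is a minimal sketch of the two flavors and of the dictionary behavior of BatchEncoding (the bert-base-uncased checkpoint is an assumption; any pretrained checkpoint works the same way):

    from transformers import BertTokenizer, BertTokenizerFast

    # Slow (pure Python) and fast (Rust-backed) tokenizers expose the same core API.
    slow_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    fast_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

    encoding = fast_tokenizer("Hello world!")  # returns a BatchEncoding
    print(encoding["input_ids"])               # dictionary-style access to model inputs
    print(encoding["attention_mask"])

    # Alignment methods are only available on the fast flavor.
    print(encoding.tokens())          # token strings, including special tokens
    print(encoding.char_to_token(0))  # index of the token covering character 0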

PreTrainedTokenizer

class transformers.PreTrainedTokenizer(**kwargs)[source]

Base class for all slow tokenizers.

Inherits from PreTrainedTokenizerBase.

Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.

This class also contains the added tokens in a unified way on top of all tokenizers, so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).

Class attributes (overridden by derived classes)
  • vocab_files_names (Dict[str, str]) – A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).

  • pretrained_vocab_files_map (Dict[str, Dict[str, str]]) – A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level being the short-cut-names of the pretrained models with, as associated values, the url to the associated pretrained vocabulary file.

  • max_model_input_sizes (Dict[str, Optional[int]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.

  • pretrained_init_configuration (Dict[str, Dict[str, Any]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.

  • model_input_names (List[str]) – A list of inputs expected in the forward pass of the model.

  • padding_side (str) – The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.

Parameters
  • model_max_length (int, optional) – The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be 'right' or 'left'. Default value is picked from the class attribute of the same name.

  • model_input_names (List[string], optional) – The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.

  • bos_token (str or tokenizers.AddedToken, optional) – A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.

  • eos_token (str or tokenizers.AddedToken, optional) – A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.

  • unk_token (str or tokenizers.AddedToken, optional) – A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.

  • sep_token (str or tokenizers.AddedToken, optional) – A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.

  • pad_token (str or tokenizers.AddedToken, optional) – A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.

  • cls_token (str or tokenizers.AddedToken, optional) – A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.

  • mask_token (str or tokenizers.AddedToken, optional) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.

  • additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) – A tuple or a list of additional special tokens. Add them here to ensure they won’t be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids.

__call__(text: Union[str, List[str], List[List[str]]], text_pair: Optional[Union[str, List[str], List[List[str]]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = False, truncation: Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False, max_length: Optional[int] = None, stride: int = 0, is_pretokenized: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) → transformers.tokenization_utils_base.BatchEncoding

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.

Parameters
  • text (str, List[str], List[List[str]]) – The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (a pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set is_pretokenized=True (to lift the ambiguity with a batch of sequences).

  • text_pair (str, List[str], List[List[str]]) – The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (a pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set is_pretokenized=True (to lift the ambiguity with a batch of sequences).

  • add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.

  • padding (bool, str or PaddingStrategy, optional, defaults to False) –

    Activates and controls padding. Accepts the following values:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).

  • truncation (bool, str or TruncationStrategy, optional, defaults to False) –

    Activates and controls truncation. Accepts the following values:

    • True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).

  • max_length (int, optional) –

    Controls the maximum length to use by one of the truncation/padding parameters.

    If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.

  • stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.

  • is_pretokenized (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words), in which case the tokenizer will skip the pre-tokenization step. This is useful for NER or token classification.

  • pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • return_tensors (str or TensorType, optional) –

    If set, will return tensors instead of lists of Python integers. Acceptable values are:

    • 'tf': Return TensorFlow tf.constant objects.

    • 'pt': Return PyTorch torch.Tensor objects.

    • 'np': Return Numpy np.ndarray objects.

  • return_token_type_ids (bool, optional) –

    Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.

    What are token type IDs?

  • return_attention_mask (bool, optional) –

    Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.

    What are attention masks?

  • return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences.

  • return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.

  • return_offsets_mapping (bool, optional, defaults to False) –

    Whether or not to return (char_start, char_end) for each token.

    This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise NotImplementedError.

  • return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.

  • verbose (bool, optional, defaults to True) – Whether or not to print information and warnings.

  • **kwargs – Passed along to the self.tokenize() method.

Returns

A BatchEncoding with the following fields:

  • input_ids – List of token ids to be fed to a model.

    What are input IDs?

  • token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).

    What are token type IDs?

  • attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).

    What are attention masks?

  • overflowing_tokens – List of overflowing token sequences (when a max_length is specified and return_overflowing_tokens=True).

  • num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).

  • special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).

  • length – The length of the inputs (when return_length=True)

Return type

BatchEncoding
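A short usage sketch of this method (the checkpoint name and example sentences are assumptions):

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

    # Encode a batch of two sentences, padding to the longest and truncating at max_length.
    batch = tokenizer(
        ["A short sentence.", "A slightly longer second sentence."],
        padding=True,         # same as padding='longest'
        truncation=True,      # same as truncation='longest_first'
        max_length=16,
        return_tensors="pt",  # PyTorch tensors; requires torch to be installed
    )
    print(batch["input_ids"].shape)  # (2, length of the longest padded sequence)
    print(batch["attention_mask"])   # 0 marks padding positions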

convert_ids_to_tokens(ids: int, skip_special_tokens: bool = False) → str[source]
convert_ids_to_tokens(ids: List[int], skip_special_tokens: bool = False) → List[str]

Converts a single index or a sequence of indices to a token or a sequence of tokens, using the vocabulary and added tokens.

Parameters
  • ids (int or List[int]) – The token id (or token ids) to convert to tokens.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

Returns

The decoded token(s).

Return type

str or List[str]

convert_tokens_to_ids(tokens: Union[str, List[str]]) → Union[int, List[int]][source]

Converts a token string (or a sequence of tokens) to a single integer id (or a sequence of ids), using the vocabulary.

Parameters

tokens (str or List[str]) – One or several token(s) to convert to token id(s).

Returns

The token id or list of token ids.

Return type

int or List[int]
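Together with convert_ids_to_tokens() above, this gives a simple round trip between token strings and ids; a minimal sketch (the checkpoint name is an assumption):

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    tokens = tokenizer.tokenize("unbelievable")    # sub-word pieces, e.g. with '##' continuation markers for BERT
    ids = tokenizer.convert_tokens_to_ids(tokens)  # vocabulary indices
    assert tokenizer.convert_ids_to_tokens(ids) == tokens  # recovers the token strings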

convert_tokens_to_string(tokens: List[str]) → str[source]

Converts a sequence of tokens to a single string.

The simplest way to do it is " ".join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.

Parameters

tokens (List[str]) – The tokens to join into a string.

Returns

The joined tokens.

decode(token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True) → str[source]

Converts a sequence of ids to a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

Parameters
  • token_ids (List[int]) – List of tokenized input ids. Can be obtained using the __call__ method.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

  • clean_up_tokenization_spaces (bool, optional, defaults to True) – Whether or not to clean up the tokenization spaces.

Returns

The decoded sentence.

Return type

str
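A quick decoding sketch (BERT-style special tokens and the checkpoint name are assumptions):

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    ids = tokenizer("Hello world!")["input_ids"]
    print(tokenizer.decode(ids))                            # includes special tokens such as [CLS] and [SEP]
    print(tokenizer.decode(ids, skip_special_tokens=True))  # just the text, with tokenization spaces cleaned up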

get_added_vocab() → Dict[str, int][source]

Returns the added tokens in the vocabulary as a dictionary of token to index.

Returns

The added tokens.

Return type

Dict[str, int]

get_special_tokens_mask(token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False) → List[int][source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.

Parameters
  • token_ids_0 (List[int]) – List of ids of the first sequence.

  • token_ids_1 (List[int], optional) – List of ids of the second sequence.

  • already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.

Returns

1 for a special token, 0 for a sequence token.

Return type

A list of integers in the range [0, 1]
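A sketch of how the mask lines up with a pair of sequences (BERT-style [CLS]/[SEP] insertion and the checkpoint name are assumptions):

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    ids_a = tokenizer.encode("Hello world!", add_special_tokens=False)
    ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
    mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
    # For BERT this is [1] + [0] * len(ids_a) + [1] + [0] * len(ids_b) + [1]:
    # 1 marks the positions where special tokens will be inserted.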

get_vocab() → Dict[str, int][source]

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns

The vocabulary.

Return type

Dict[str, int]

num_special_tokens_to_add(pair: bool = False) → int[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Note

This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

Parameters

pair (bool, optional, defaults to False) – Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.

Returns

Number of special tokens added to sequences.

Return type

int
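For instance, assuming BERT conventions ([CLS] and [SEP] around a single sequence, one extra [SEP] for a pair):

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    print(tokenizer.num_special_tokens_to_add())           # 2 for BERT-style tokenizers
    print(tokenizer.num_special_tokens_to_add(pair=True))  # 3
    # Handy for budgeting sequence length before truncation:
    text_budget = tokenizer.model_max_length - tokenizer.num_special_tokens_to_add()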

prepare_for_tokenization(text: str, is_pretokenized: bool = False, **kwargs) → Tuple[str, Dict[str, Any]][source]

Performs any necessary transformations before tokenization.

This method should pop its arguments from kwargs and return the remaining kwargs as well. We check the kwargs at the end of the encoding process to be sure all the arguments have been used.

Parameters
  • text (str) – The text to prepare.

  • is_pretokenized (bool, optional, defaults to False) – Whether or not the text has been pretokenized.

  • kwargs – Keyword arguments to use for the tokenization.

Returns

The prepared text and the unused kwargs.

Return type

Tuple[str, Dict[str, Any]]
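A hedged sketch of how a subclass might implement this hook; the lowercase flag is purely hypothetical:

    from transformers import PreTrainedTokenizer

    class MyTokenizer(PreTrainedTokenizer):
        # A real subclass must also implement the vocabulary and tokenization methods.
        def prepare_for_tokenization(self, text, is_pretokenized=False, **kwargs):
            # Pop custom arguments so the unused-kwargs check at the end of encoding passes.
            lowercase = kwargs.pop("lowercase", False)  # hypothetical custom flag
            if lowercase:
                text = text.lower()
            return (text, kwargs)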

save_vocabulary(save_directory) → Tuple[str][source]

Save the tokenizer vocabulary to a directory. This method does NOT save added tokens and special token mappings.

Warning

Please use save_pretrained() to save the full tokenizer state if you want to reload it using the from_pretrained() class method.

Parameters

save_directory (str) – The path to a directory where the tokenizer will be saved.

Returns

The files saved.

Return type

A tuple of str

tokenize(text: str, **kwargs) → List[str][source]

Converts a string to a sequence of tokens, using the tokenizer.

Splits into words for word-based vocabularies or into sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece). Takes care of added tokens.

Parameters
  • text (str) – The sequence to be encoded.

  • **kwargs (additional keyword arguments) – Passed along to the model-specific prepare_for_tokenization preprocessing method.

Returns

The list of tokens.

Return type

List[str]
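A brief sketch; the custom token added here is an assumption, included to show that added tokens survive splitting:

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    print(tokenizer.tokenize("Don't hesitate!"))   # sub-word pieces from the vocabulary

    tokenizer.add_tokens(["<custom>"])             # add_tokens is inherited from the base class
    print(tokenizer.tokenize("a <custom> token"))  # '<custom>' is kept as a single token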

property vocab_size

Size of the base vocabulary (without the added tokens).

Type

int

PreTrainedTokenizerFast

class transformers.PreTrainedTokenizerFast(tokenizer: tokenizers.implementations.base_tokenizer.BaseTokenizer, **kwargs)[source]

Base class for all fast tokenizers (wrapping HuggingFace tokenizers library).

Inherits from PreTrainedTokenizerBase.

Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers, as well as adding tokens to the vocabulary.

This class also contains the added tokens in a unified way on top of all tokenizers so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).

Class attributes (overridden by derived classes)
  • vocab_files_names (Dict[str, str]) – A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).

  • pretrained_vocab_files_map (Dict[str, Dict[str, str]]) – A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level being the short-cut-names of the pretrained models with, as associated values, the url to the associated pretrained vocabulary file.

  • max_model_input_sizes (Dict[str, Optional[int]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.

  • pretrained_init_configuration (Dict[str, Dict[str, Any]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.

  • model_input_names (List[str]) – A list of inputs expected in the forward pass of the model.

  • padding_side (str) – The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.

Parameters
  • model_max_length (int, optional) – The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be 'right' or 'left'. Default value is picked from the class attribute of the same name.

  • model_input_names (List[string], optional) – The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.

  • bos_token (str or tokenizers.AddedToken, optional) – A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.

  • eos_token (str or tokenizers.AddedToken, optional) – A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.

  • unk_token (str or tokenizers.AddedToken, optional) – A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.

  • sep_token (str or tokenizers.AddedToken, optional) – A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.

  • pad_token (str or tokenizers.AddedToken, optional) – A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.

  • cls_token (str or tokenizers.AddedToken, optional) – A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.

  • mask_token (str or tokenizers.AddedToken, optional) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.

  • additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) – A tuple or a list of additional special tokens. Add them here to ensure they won’t be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids.

__call__(text: Union[str, List[str], List[List[str]]], text_pair: Optional[Union[str, List[str], List[List[str]]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = False, truncation: Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False, max_length: Optional[int] = None, stride: int = 0, is_pretokenized: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) → transformers.tokenization_utils_base.BatchEncoding

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.

Parameters
  • text (str, List[str], List[List[str]]) – The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (a pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set is_pretokenized=True (to lift the ambiguity with a batch of sequences).

  • text_pair (str, List[str], List[List[str]]) – The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (a pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set is_pretokenized=True (to lift the ambiguity with a batch of sequences).

  • add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.

  • padding (bool, str or PaddingStrategy, optional, defaults to False) –

    Activates and controls padding. Accepts the following values:

    • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).

    • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.

    • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).

  • truncation (bool, str or TruncationStrategy, optional, defaults to False) –

    Activates and controls truncation. Accepts the following values:

    • True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.

    • False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).

  • max_length (int, optional) –

    Controls the maximum length to use by one of the truncation/padding parameters.

    If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.

  • stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.

  • is_pretokenized (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words), in which case the tokenizer will skip the pre-tokenization step. This is useful for NER or token classification.

  • pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

  • return_tensors (str or TensorType, optional) –

    If set, will return tensors instead of lists of Python integers. Acceptable values are:

    • 'tf': Return TensorFlow tf.constant objects.

    • 'pt': Return PyTorch torch.Tensor objects.

    • 'np': Return Numpy np.ndarray objects.

  • return_token_type_ids (bool, optional) –

    Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.

    What are token type IDs?

  • return_attention_mask (bool, optional) –

    Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.

    What are attention masks?

  • return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences.

  • return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.

  • return_offsets_mapping (bool, optional, defaults to False) –

    Whether or not to return (char_start, char_end) for each token.

    This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise NotImplementedError.

  • return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.

  • verbose (bool, optional, defaults to True) – Whether or not to print information and warnings.

  • **kwargs – Passed along to the self.tokenize() method.

Returns

A BatchEncoding with the following fields:

  • input_ids – List of token ids to be fed to a model.

    What are input IDs?

  • token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).

    What are token type IDs?

  • attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).

    What are attention masks?

  • overflowing_tokens – List of overflowing token sequences (when a max_length is specified and return_overflowing_tokens=True).

  • num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).

  • special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).

  • length – The length of the inputs (when return_length=True)

Return type

BatchEncoding
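The offsets mapping is the main fast-only extra of this method; a minimal sketch (the checkpoint name is an assumption):

    from transformers import BertTokenizerFast

    fast_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    encoding = fast_tokenizer("Hello world!", return_offsets_mapping=True)

    # Each token gets a (char_start, char_end) span into the original string;
    # special tokens map to (0, 0).
    for token, span in zip(encoding.tokens(), encoding["offset_mapping"]):
        print(token, span)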

property backend_tokenizer

The Rust tokenizer used as a backend.

Type

tokenizers.implementations.BaseTokenizer

convert_ids_to_tokens(ids: Union[int, List[int]], skip_special_tokens: bool = False) → Union[str, List[str]][source]

Converts a single index or a sequence of indices to a token or a sequence of tokens, using the vocabulary and added tokens.

Parameters
  • ids (int or List[int]) – The token id (or token ids) to convert to tokens.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

Returns

The decoded token(s).

Return type

str or List[str]

convert_tokens_to_ids(tokens: Union[str, List[str]]) → Union[int, List[int]][source]

Converts a token string (or a sequence of tokens) to a single integer id (or a sequence of ids), using the vocabulary.

Parameters

tokens (str or List[str]) – One or several token(s) to convert to token id(s).

Returns

The token id or list of token ids.

Return type

int or List[int]

decode(token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True) → str[source]

Converts a sequence of ids to a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

Parameters
  • token_ids (List[int]) – List of tokenized input ids. Can be obtained using the __call__ method.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

  • clean_up_tokenization_spaces (bool, optional, defaults to True) – Whether or not to clean up the tokenization spaces.

Returns

The decoded sentence.

Return type

str

property decoder

The Rust decoder for this tokenizer.

Type

tokenizers.decoders.Decoder

get_added_vocab() → Dict[str, int][source]

Returns the added tokens in the vocabulary as a dictionary of token to index.

Returns

The added tokens.

Return type

Dict[str, int]

get_vocab() → Dict[str, int][source]

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns

The vocabulary.

Return type

Dict[str, int]

num_special_tokens_to_add(pair: bool = False) → int[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Note

This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

Parameters

pair (bool, optional, defaults to False) – Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.

Returns

Number of special tokens added to sequences.

Return type

int

save_vocabulary(save_directory: str) → Tuple[str][source]

Save the tokenizer vocabulary to a directory. This method does NOT save added tokens and special token mappings.

Warning

Please use save_pretrained() to save the full tokenizer state if you want to reload it using the from_pretrained() class method.

Parameters

save_directory (str) – The path to a directory where the tokenizer will be saved.

Returns

The files saved.

Return type

A tuple of str

set_truncation_and_padding(padding_strategy: transformers.tokenization_utils_base.PaddingStrategy, truncation_strategy: transformers.tokenization_utils_base.TruncationStrategy, max_length: int, stride: int, pad_to_multiple_of: Optional[int])[source]

Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers library) and restore the tokenizer settings afterwards.

The provided tokenizer has no padding/truncation strategy before the managed section. If your tokenizer had a padding/truncation strategy set before, it will be reset to no padding/truncation when exiting the managed section.

Parameters
  • padding_strategy (PaddingStrategy) – The kind of padding that will be applied to the input.

  • truncation_strategy (TruncationStrategy) – The kind of truncation that will be applied to the input.

  • max_length (int) – The maximum size of a sequence.

  • stride (int) – The stride to use when handling overflow.

  • pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

tokenize(text: str, pair: Optional[str] = None, add_special_tokens: bool = False) → List[str][source]

Converts a string to a sequence of tokens, using the backend Rust tokenizer.

Parameters
  • text (str) – The sequence to be encoded.

  • pair (str, optional) – A second sequence to be encoded with the first.

  • add_special_tokens (bool, optional, defaults to False) – Whether or not to add the special tokens associated with the corresponding model.

Returns

The list of tokens.

Return type

List[str]
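A short sketch of tokenizing a pair with the backend tokenizer (the checkpoint name is an assumption):

    from transformers import BertTokenizerFast

    fast_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    pieces = fast_tokenizer.tokenize("Hello world!", pair="How are you?", add_special_tokens=True)
    # One flat token list covering both sequences, with the model's special tokens inserted.
    print(pieces)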

property vocab_size

Size of the base vocabulary (without the added tokens).

Type

int

BatchEncoding

class transformers.BatchEncoding(data: Optional[Dict[str, Any]] = None, encoding: Optional[Union[tokenizers.Encoding, Sequence[tokenizers.Encoding]]] = None, tensor_type: Union[None, str, transformers.tokenization_utils_base.TensorType] = None, prepend_batch_axis: bool = False)[source]

Holds the output of the __call__(), encode_plus() and batch_encode_plus() methods (tokens, attention masks, etc.).

This class is derived from a Python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space.

Parameters
  • data (dict) – Dictionary of lists/arrays/tensors returned by the encode/batch_encode methods (‘input_ids’, ‘attention_mask’, etc.).

  • encoding (tokenizers.Encoding or Sequence[tokenizers.Encoding], optional) – If the tokenizer is a fast tokenizer which outputs additional information such as the mapping from word/character space to token space, the tokenizers.Encoding instance or list of instances (for batches) holds this information.

  • tensor_type (Union[None, str, TensorType], optional) – You can give a tensor_type here to convert the lists of integers in PyTorch/TensorFlow/Numpy Tensors at initialization.

  • prepend_batch_axis (bool, optional, defaults to False) – Whether or not to add a batch axis when converting to tensors (see tensor_type above).

char_to_token(batch_or_char_index: int, char_index: Optional[int] = None) → int[source]

Get the index of the token in the encoded output comprising a character in the original string for a sequence of the batch.

Can be called as:

  • self.char_to_token(char_index) if batch size is 1

  • self.char_to_token(batch_index, char_index) if batch size is greater than or equal to 1

This method is particularly suited to cases where the input sequences are provided as pre-tokenized sequences (i.e., words defined by the user), since it allows you to easily associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_char_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the sequence.

  • char_index (int, optional) – If a batch index is provided in batch_or_char_index, this can be the index of the character in the sequence.

Returns

Index of the token.

Return type

int
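A minimal sketch (fast tokenizer and checkpoint name are assumptions, since this method needs a fast tokenizer's encodings):

    from transformers import BertTokenizerFast

    fast_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    encoding = fast_tokenizer("Tokenizers are great")
    token_index = encoding.char_to_token(3)  # token covering character 3 of the sequence
    print(encoding.tokens()[token_index])
    # With a batch: encoding.char_to_token(batch_index, char_index)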

char_to_word(batch_or_char_index: int, char_index: Optional[int] = None) → int[source]

Get the word in the original string corresponding to a character in the original string, for a sequence of the batch.

Can be called as:

  • self.char_to_word(char_index) if batch size is 1

  • self.char_to_word(batch_index, char_index) if batch size is greater than 1

This method is particularly suited to cases where the input sequences are provided as pre-tokenized sequences (i.e., words defined by the user), since it allows you to easily associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_char_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the original string.

  • char_index (int, optional) – If a batch index is provided in batch_or_char_index, this can be the index of the character in the original string.

Returns

Index of the associated word in the original string.

Return type

int

convert_to_tensors(tensor_type: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, prepend_batch_axis: bool = False)[source]

Convert the inner content to tensors.

Parameters
  • tensor_type (str or TensorType, optional) – The type of tensors to use. If str, should be one of the values of the enum TensorType. If None, no modification is done.

  • prepend_batch_axis (bool, optional, defaults to False) – Whether or not to add the batch dimension during the conversion.

property encodings

The list of all encodings from the tokenization process. Returns None if the input was tokenized through a Python (i.e., not fast) tokenizer.

Type

Optional[List[tokenizers.Encoding]]

property is_fast

Indicate whether this BatchEncoding was generated from the result of a PreTrainedTokenizerFast or not.

Type

bool

items() → a set-like object providing a view on D’s items[source]
keys() → a set-like object providing a view on D’s keys[source]
to(device: str) → BatchEncoding[source]

Send all values to device by calling v.to(device) (PyTorch only).

Parameters

device (str or torch.device) – The device to put the tensors on.

Returns

The same instance of BatchEncoding after modification.

Return type

BatchEncoding
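For example (assumes PyTorch tensors were requested and a CUDA device is available):

    from transformers import BertTokenizerFast

    fast_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    batch = fast_tokenizer(["first text", "second text"], padding=True, return_tensors="pt")
    batch = batch.to("cuda")    # moves every tensor value in the BatchEncoding
    # outputs = model(**batch)  # hypothetical model living on the same device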

token_to_chars(batch_or_token_index: int, token_index: Optional[int] = None) → transformers.tokenization_utils_base.CharSpan[source]

Get the character span corresponding to an encoded token in a sequence of the batch.

Character spans are returned as a CharSpan with:

  • start – Index of the first character in the original string associated to the token.

  • end – Index of the character following the last character in the original string associated to the token.

Can be called as:

  • self.token_to_chars(token_index) if batch size is 1

  • self.token_to_chars(batch_index, token_index) if batch size is greater than or equal to 1

Parameters
  • batch_or_token_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.

  • token_index (int, optional) – If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.

Returns

Span of characters in the original string.

Return type

CharSpan

token_to_word(batch_or_token_index: int, token_index: Optional[int] = None) → int[source]

Get the index of the word corresponding to (i.e., comprising) an encoded token in a sequence of the batch.

Can be called as:

  • self.token_to_word(token_index) if batch size is 1

  • self.token_to_word(batch_index, token_index) if batch size is greater than 1

This method is particularly suited to cases where the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user), since it allows you to easily associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_token_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.

  • token_index (int, optional) – If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.

Returns

Index of the word in the input sequence.

Return type

int

tokens(batch_index: int = 0) → List[str][source]

Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to integer indices) at a given batch index (only works for the output of a fast tokenizer).

Parameters

batch_index (int, optional, defaults to 0) – The index to access in the batch.

Returns

The list of tokens at that index.

Return type

List[str]

values() → an object providing a view on D’s values[source]
word_to_chars(batch_or_word_index: int, word_index: Optional[int] = None) → transformers.tokenization_utils_base.CharSpan[source]

Get the character span in the original string corresponding to a given word in a sequence of the batch.

Character spans are returned as a CharSpan NamedTuple with:

  • start: index of the first character in the original string

  • end: index of the character following the last character in the original string

Can be called as:

  • self.word_to_chars(word_index) if batch size is 1

  • self.word_to_chars(batch_index, word_index) if batch size is greater than or equal to 1

Parameters
  • batch_or_word_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.

  • word_index (int, optional) – If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.

Returns

Span(s) of the associated character or characters in the string. CharSpan is a NamedTuple with:

  • start: index of the first character associated to the token in the original string

  • end: index of the character following the last character associated to the token in the original string

Return type

CharSpan or List[CharSpan]

word_to_tokens(batch_or_word_index: int, word_index: Optional[int] = None) → transformers.tokenization_utils_base.TokenSpan[source]

Get the encoded token span corresponding to a word in the sequence of the batch.

Token spans are returned as a TokenSpan with:

  • start – Index of the first token.

  • end – Index of the token following the last token.

Can be called as:

  • self.word_to_tokens(word_index) if batch size is 1

  • self.word_to_tokens(batch_index, word_index) if batch size is greater than or equal to 1

This method is particularly suited to cases where the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user), since it allows you to easily associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_word_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.

  • word_index (int, optional) – If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.

Returns

Span of tokens in the encoded sequence.

Return type

TokenSpan

words(batch_index: int = 0) → List[Optional[int]][source]

Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.

Parameters

batch_index (int, optional, defaults to 0) – The index to access in the batch.

Returns

A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).

Return type

List[Optional[int]]
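A closing sketch tying words() and word_to_tokens() together (fast tokenizer and checkpoint name are assumptions):

    from transformers import BertTokenizerFast

    fast_tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    encoding = fast_tokenizer("Tokenization is great")
    print(encoding.words())  # e.g. [None, 0, 0, 1, 2, None]; None marks special tokens

    span = encoding.word_to_tokens(0)              # TokenSpan for the first word
    print(encoding.tokens()[span.start:span.end])  # the sub-word pieces of that word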