Utilities for Tokenizers¶
This page lists all the utility functions used by the tokenizers, mainly the class PreTrainedTokenizerBase, which implements the common methods shared by PreTrainedTokenizer and PreTrainedTokenizerFast, and the mixin SpecialTokensMixin.
Most of these are only useful if you are studying the code of the tokenizers in the library.
PreTrainedTokenizerBase¶
-
class
transformers.tokenization_utils_base.
PreTrainedTokenizerBase
(**kwargs)[source]¶ Base class for PreTrainedTokenizer and PreTrainedTokenizerFast.
Handles shared (mostly boilerplate) methods for those two classes.
- Class attributes (overridden by derived classes)
vocab_files_names (Dict[str, str]) – A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
pretrained_vocab_files_map (Dict[str, Dict[str, str]]) – A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level keys being the short-cut-names of the pretrained models, and as associated values, the url to the associated pretrained vocabulary file.
max_model_input_sizes (Dict[str, Optional[int]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.
pretrained_init_configuration (Dict[str, Dict[str, Any]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.
model_input_names (List[str]) – A list of inputs expected in the forward pass of the model.
padding_side (str) – The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.
- Parameters
model_max_length (int, optional) – The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).
padding_side (str, optional) – The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
model_input_names (List[str], optional) – The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.
bos_token (str or tokenizers.AddedToken, optional) – A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.
eos_token (str or tokenizers.AddedToken, optional) – A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.
unk_token (str or tokenizers.AddedToken, optional) – A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.
sep_token (str or tokenizers.AddedToken, optional) – A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.
pad_token (str or tokenizers.AddedToken, optional) – A special token used to make arrays of tokens the same size for batching purposes, which will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.
cls_token (str or tokenizers.AddedToken, optional) – A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.
mask_token (str or tokenizers.AddedToken, optional) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.
additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) – A tuple or a list of additional special tokens. Add them here to ensure they won’t be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids.
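For illustration, a minimal sketch of how these special token parameters surface as paired attributes (this assumes the bert-base-uncased checkpoint can be downloaded; the printed ids are specific to that vocabulary):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Every special token is exposed both as a string and as its vocabulary id.
print(tokenizer.cls_token, tokenizer.cls_token_id)  # '[CLS]' 101
print(tokenizer.sep_token, tokenizer.sep_token_id)  # '[SEP]' 102
print(tokenizer.pad_token, tokenizer.pad_token_id)  # '[PAD]' 0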
-
__call__
(text: Union[str, List[str], List[List[str]]], text_pair: Optional[Union[str, List[str], List[List[str]]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = False, truncation: Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) → transformers.tokenization_utils_base.BatchEncoding[source]¶ Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
- Parameters
text (str, List[str] or List[List[str]]) – The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str] or List[List[str]], optional) – The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) – Activates and controls padding. Accepts the following values:
  - True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) – Activates and controls truncation. Accepts the following values:
  - True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) – Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words), in which case the tokenizer will skip the pre-tokenization step. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) – If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - 'tf': Return TensorFlow tf.constant objects.
  - 'pt': Return PyTorch torch.Tensor objects.
  - 'np': Return NumPy np.ndarray objects.
return_token_type_ids (bool, optional) – Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_attention_mask (bool, optional) – Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences.
return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) – Whether or not to return (char_start, char_end) for each token. This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using a Python tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) – Whether or not to print information and warnings.
**kwargs – Passed along to the self.tokenize() method.
- Returns
A BatchEncoding with the following fields:
input_ids – List of token ids to be fed to a model.
token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if "token_type_ids" is in self.model_input_names).
attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if "attention_mask" is in self.model_input_names).
overflowing_tokens – List of overflowing token sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length – The length of the inputs (when return_length=True).
- Return type
BatchEncoding
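Example (a minimal sketch, assuming the bert-base-uncased checkpoint is available):
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
# Encode a batch of sentence pairs, padding to the longest sequence in the
# batch and truncating anything longer than 16 tokens.
batch = tokenizer(
    ["Hello world", "A somewhat longer second sentence"],
    ["a pair", "another pair"],
    padding=True,
    truncation=True,
    max_length=16,
)
print(batch["input_ids"])       # list of lists of token ids
print(batch["attention_mask"])  # 1 for real tokens, 0 for padding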
-
batch_decode
(sequences: List[List[int]], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True) → List[str][source]¶ Convert a list of lists of token ids into a list of strings by calling decode.
- Parameters
sequences (List[List[int]]) – List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional, defaults to True) – Whether or not to clean up the tokenization spaces.
- Returns
The list of decoded sentences.
- Return type
List[str]
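Example (a minimal sketch; with bert-base-uncased the outputs are lower-cased):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
batch = tokenizer(["Hello world", "How are you?"])
# Decode the whole batch at once, dropping special tokens like [CLS]/[SEP].
print(tokenizer.batch_decode(batch["input_ids"], skip_special_tokens=True))
# ['hello world', 'how are you?']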
-
batch_encode_plus
(batch_text_or_text_pairs: Union[List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], List[List[int]], List[Tuple[List[int], List[int]]]], add_special_tokens: bool = True, padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = False, truncation: Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) → transformers.tokenization_utils_base.BatchEncoding[source]¶ Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
Warning
This method is deprecated; __call__ should be used instead.
- Parameters
batch_text_or_text_pairs (List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], and for not-fast tokenizers, also List[List[int]], List[Tuple[List[int], List[int]]]) – Batch of sequences or pairs of sequences to be encoded. This can be a list of string/string-sequences/int-sequences or a list of pairs of string/string-sequences/int-sequences (see details in encode_plus).
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) – Activates and controls padding. Accepts the following values:
  - True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) – Activates and controls truncation. Accepts the following values:
  - True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) – Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words), in which case the tokenizer will skip the pre-tokenization step. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) – If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - 'tf': Return TensorFlow tf.constant objects.
  - 'pt': Return PyTorch torch.Tensor objects.
  - 'np': Return NumPy np.ndarray objects.
return_token_type_ids (bool, optional) – Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_attention_mask (bool, optional) – Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences.
return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) – Whether or not to return (char_start, char_end) for each token. This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using a Python tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) – Whether or not to print information and warnings.
**kwargs – Passed along to the self.tokenize() method.
- Returns
A BatchEncoding with the following fields:
input_ids – List of token ids to be fed to a model.
token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if "token_type_ids" is in self.model_input_names).
attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if "attention_mask" is in self.model_input_names).
overflowing_tokens – List of overflowing token sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length – The length of the inputs (when return_length=True).
- Return type
BatchEncoding
-
build_inputs_with_special_tokens
(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]¶ Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.
This implementation does not add special tokens; this method should be overridden in a subclass.
- Parameters
token_ids_0 (List[int]) – The first tokenized sequence.
token_ids_1 (List[int], optional) – The second tokenized sequence.
- Returns
The model input with special tokens.
- Return type
List[int]
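While the base implementation is a no-op, derived classes override it; a minimal sketch with BertTokenizer (assuming the checkpoint is available):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("world"))
# BertTokenizer's override produces: [CLS] A [SEP] B [SEP]
inputs = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.convert_ids_to_tokens(inputs))
# ['[CLS]', 'hello', '[SEP]', 'world', '[SEP]']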
-
static
clean_up_tokenization
(out_string: str) → str[source]¶ Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.
- Parameters
out_string (str) – The text to clean up.
- Returns
The cleaned-up string.
- Return type
str
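Example (the method is a static helper, so it can be called on the class itself):
from transformers.tokenization_utils_base import PreTrainedTokenizerBase

print(PreTrainedTokenizerBase.clean_up_tokenization("Do n't stop !"))
# "Don't stop!"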
-
create_token_type_ids_from_sequences
(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int][source]¶ Create the token type IDs corresponding to the sequences passed. What are token type IDs?
Should be overridden in a subclass if the model has a special way of building those.
- Parameters
token_ids_0 (List[int]) – The first tokenized sequence.
token_ids_1 (List[int], optional) – The second tokenized sequence.
- Returns
The token type ids.
- Return type
List[int]
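A minimal sketch with BertTokenizer, which overrides this method (assuming the checkpoint is available):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("world"))
# For BERT the first segment ([CLS] A [SEP]) gets type 0 and the second
# segment (B [SEP]) gets type 1.
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))
# [0, 0, 0, 1, 1]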
-
decode
(token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True) → str[source]¶ Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
- Parameters
token_ids (List[int]) – List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional, defaults to True) – Whether or not to clean up the tokenization spaces.
- Returns
The decoded sentence.
- Return type
str
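Example of an encode/decode round trip (a minimal sketch; the exact output is specific to bert-base-uncased):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode("Hello world!")
print(tokenizer.decode(ids))                            # "[CLS] hello world! [SEP]"
print(tokenizer.decode(ids, skip_special_tokens=True))  # "hello world!"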
-
encode
(text: Union[str, List[str], List[int]], text_pair: Optional[Union[str, List[str], List[int]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = False, truncation: Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False, max_length: Optional[int] = None, stride: int = 0, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, **kwargs) → List[int][source]¶ Converts a string to a sequence of ids (integer), using the tokenizer and vocabulary.
Same as doing self.convert_tokens_to_ids(self.tokenize(text)).
- Parameters
text (str, List[str] or List[int]) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional) – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) – Activates and controls padding. Accepts the following values:
  - True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) – Activates and controls truncation. Accepts the following values:
  - True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) – Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words), in which case the tokenizer will skip the pre-tokenization step. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) – If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - 'tf': Return TensorFlow tf.constant objects.
  - 'pt': Return PyTorch torch.Tensor objects.
  - 'np': Return NumPy np.ndarray objects.
**kwargs – Passed along to the .tokenize() method.
- Returns
The tokenized ids of the text.
- Return type
List[int], torch.Tensor, tf.Tensor or np.ndarray
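Example (a minimal sketch, assuming the bert-base-uncased checkpoint is available):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Encoding a pair wraps both segments with the model's special tokens:
# [CLS] first segment [SEP] second segment [SEP]
ids = tokenizer.encode("What is padding?", "Padding makes batches rectangular.",
                       truncation=True, max_length=20)
print(tokenizer.decode(ids))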
-
encode_plus
(text: Union[str, List[str], List[int]], text_pair: Optional[Union[str, List[str], List[int]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = False, truncation: Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs) → transformers.tokenization_utils_base.BatchEncoding[source]¶ Tokenize and prepare for the model a sequence or a pair of sequences.
Warning
This method is deprecated; __call__ should be used instead.
- Parameters
text (str, List[str] or List[int] (the latter only for not-fast tokenizers)) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional) – Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) – Activates and controls padding. Accepts the following values:
  - True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) – Activates and controls truncation. Accepts the following values:
  - True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) – Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words), in which case the tokenizer will skip the pre-tokenization step. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) – If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - 'tf': Return TensorFlow tf.constant objects.
  - 'pt': Return PyTorch torch.Tensor objects.
  - 'np': Return NumPy np.ndarray objects.
return_token_type_ids (bool, optional) – Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_attention_mask (bool, optional) – Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences.
return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) – Whether or not to return (char_start, char_end) for each token. This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using a Python tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) – Whether or not to print information and warnings.
**kwargs – Passed along to the self.tokenize() method.
- Returns
A BatchEncoding with the following fields:
input_ids – List of token ids to be fed to a model.
token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if "token_type_ids" is in self.model_input_names).
attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if "attention_mask" is in self.model_input_names).
overflowing_tokens – List of overflowing token sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length – The length of the inputs (when return_length=True).
- Return type
BatchEncoding
-
classmethod
from_pretrained
(*inputs, **kwargs)[source]¶ Instantiate a PreTrainedTokenizerBase (or a derived class) from a predefined tokenizer.
- Parameters
pretrained_model_name_or_path (str) – Can be either:
  - A string with the shortcut name of a predefined tokenizer to load from cache or download, e.g., bert-base-uncased.
  - A string with the identifier name of a predefined tokenizer that was user-uploaded to our S3, e.g., dbmdz/bert-base-german-cased.
  - A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - (Deprecated, not applicable to all derived classes) A path or url to a single saved vocabulary file (if and only if the tokenizer only requires a single vocabulary file like Bert or XLNet), e.g., ./my_model_directory/vocab.txt.
cache_dir (str, optional) – Path to a directory in which the downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the vocabulary files and override the cached versions if they exist.
resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Attempts to resume the download if such a file exists.
proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
inputs (additional positional arguments, optional) – Will be passed along to the Tokenizer __init__ method.
kwargs (additional keyword arguments, optional) – Will be passed to the Tokenizer __init__ method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__ for more details.
Examples:
# We can't instantiate directly the base class `PreTrainedTokenizerBase` so
# let's show our examples on a derived class: BertTokenizer

# Download vocabulary from S3 and cache.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Download vocabulary from S3 (user-uploaded) and cache.
tokenizer = BertTokenizer.from_pretrained('dbmdz/bert-base-german-cased')

# If vocabulary files are in a directory (e.g. tokenizer was saved using
# `save_pretrained('./test/saved_model/')`)
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/')

# If the tokenizer uses a single vocabulary file, you can point directly to this file
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/my_vocab.txt')

# You can link tokens to special vocabulary when instantiating
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', unk_token='<unk>')
# You should be sure '<unk>' is in the vocabulary when doing that.
# Otherwise use tokenizer.add_special_tokens({'unk_token': '<unk>'}) instead.
assert tokenizer.unk_token == '<unk>'
-
get_special_tokens_mask
(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int][source]¶ Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.
- Parameters
token_ids_0 (List[int]) – List of ids of the first sequence.
token_ids_1 (List[int], optional) – List of ids of the second sequence.
already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.
- Returns
1 for a special token, 0 for a sequence token.
- Return type
A list of integers in the range [0, 1]
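Example (a minimal sketch with bert-base-uncased):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode("hello world")  # [CLS] hello world [SEP]
print(tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True))
# [1, 0, 0, 1] -> [CLS] and [SEP] are flagged as special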
-
property
max_len
¶ Deprecated. Kept here for backward compatibility. Now renamed to model_max_length to avoid ambiguity.
- Type
int
-
property
max_len_sentences_pair
¶ The maximum combined length of a pair of sentences that can be fed to the model.
- Type
int
-
property
max_len_single_sentence
¶ The maximum length of a sentence that can be fed to the model.
- Type
int
-
pad
(encoded_inputs: Union[transformers.tokenization_utils_base.BatchEncoding, List[transformers.tokenization_utils_base.BatchEncoding], Dict[str, List[int]], Dict[str, List[List[int]]], List[Dict[str, List[int]]]], padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = True, max_length: Optional[int] = None, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, verbose: bool = True) → transformers.tokenization_utils_base.BatchEncoding[source]¶ Pad a single encoded input or a batch of encoded inputs up to a predefined length or to the max sequence length in the batch.
The padding side (left/right) and padding token ids are defined at the tokenizer level (with self.padding_side, self.pad_token_id and self.pad_token_type_id).
Note
If the encoded_inputs passed are a dictionary of numpy arrays, PyTorch tensors or TensorFlow tensors, the result will use the same type unless you provide a different tensor type with return_tensors. In the case of PyTorch tensors, you will however lose the specific device of your tensors.
- Parameters
encoded_inputs (BatchEncoding, list of BatchEncoding, Dict[str, List[int]], Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) – Tokenized inputs. Can represent one input (BatchEncoding or Dict[str, List[int]]) or a batch of tokenized inputs (list of BatchEncoding, Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) so you can use this method during preprocessing as well as in a PyTorch Dataloader collate function. Instead of List[int] you can have tensors (numpy arrays, PyTorch tensors or TensorFlow tensors); see the note above for the return type.
padding (bool, str or PaddingStrategy, optional, defaults to True) – Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:
  - True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths).
max_length (int, optional) – Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_attention_mask (bool, optional) – Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_tensors (str or TensorType, optional) – If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - 'tf': Return TensorFlow tf.constant objects.
  - 'pt': Return PyTorch torch.Tensor objects.
  - 'np': Return NumPy np.ndarray objects.
verbose (bool, optional, defaults to True) – Whether or not to print information and warnings.
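Example of padding lazily, batch by batch (a minimal sketch; return_tensors='pt' assumes PyTorch is installed):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Encode without padding first (e.g. inside a Dataset), then pad as a batch
# later (e.g. inside a DataLoader collate function).
features = [tokenizer(text) for text in ["short", "a much longer input sentence"]]
batch = tokenizer.pad(features, padding=True, return_tensors='pt')
print(batch['input_ids'].shape)    # (2, length_of_longest_sequence)
print(batch['attention_mask'][0])  # trailing zeros mark the padding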
-
prepare_for_model
(ids: List[int], pair_ids: Optional[List[int]] = None, add_special_tokens: bool = True, padding: Union[bool, str, transformers.tokenization_utils_base.PaddingStrategy] = False, truncation: Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, transformers.tokenization_utils_base.TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, prepend_batch_axis: bool = False, **kwargs) → transformers.tokenization_utils_base.BatchEncoding[source]¶ Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens, and manages a moving window (with user-defined stride) for overflowing tokens.
- Parameters
ids (List[int]) – Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
pair_ids (List[int], optional) – Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) – Activates and controls padding. Accepts the following values:
  - True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  - False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) – Activates and controls truncation. Accepts the following values:
  - True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - False or 'do_not_truncate' (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) – Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words), in which case the tokenizer will skip the pre-tokenization step. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) – If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - 'tf': Return TensorFlow tf.constant objects.
  - 'pt': Return PyTorch torch.Tensor objects.
  - 'np': Return NumPy np.ndarray objects.
return_token_type_ids (bool, optional) – Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_attention_mask (bool, optional) – Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences.
return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) – Whether or not to return (char_start, char_end) for each token. This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using a Python tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) – Whether or not to print information and warnings.
**kwargs – Passed along to the self.tokenize() method.
- Returns
A BatchEncoding with the following fields:
input_ids – List of token ids to be fed to a model.
token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if "token_type_ids" is in self.model_input_names).
attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if "attention_mask" is in self.model_input_names).
overflowing_tokens – List of overflowing token sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length – The length of the inputs (when return_length=True).
- Return type
BatchEncoding
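Example (a minimal sketch with bert-base-uncased):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
# Low-level counterpart of __call__: wraps ids that were already converted,
# applying special tokens, truncation and padding.
encoded = tokenizer.prepare_for_model(ids, add_special_tokens=True)
print(encoded["input_ids"])  # [CLS] + ids + [SEP]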
-
save_pretrained
(save_directory: str) → Tuple[str][source]¶ Save the tokenizer vocabulary files together with:
added tokens,
special tokens to class attributes mapping,
tokenizer instantiation positional and keywords inputs (e.g. do_lower_case for Bert).
This method makes sure the full tokenizer can then be re-loaded using the from_pretrained() class method.
Warning
This won’t save modifications you may have applied to the tokenizer after the instantiation (for instance, modifying tokenizer.do_lower_case after creation).
- Parameters
save_directory (str) – The path to a directory where the tokenizer will be saved.
- Returns
The files saved.
- Return type
A tuple of str
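Example of a save/re-load round trip (a minimal sketch; './my_tokenizer/' is a hypothetical output directory):
import os
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
os.makedirs('./my_tokenizer/', exist_ok=True)
files = tokenizer.save_pretrained('./my_tokenizer/')
print(files)  # tuple of paths to the saved files
# The saved directory contains everything needed to re-load the tokenizer.
reloaded = BertTokenizer.from_pretrained('./my_tokenizer/')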
-
truncate_sequences
(ids: List[int], pair_ids: Optional[List[int]] = None, num_tokens_to_remove: int = 0, truncation_strategy: Union[str, transformers.tokenization_utils_base.TruncationStrategy] = 'longest_first', stride: int = 0) → Tuple[List[int], List[int], List[int]][source]¶ Truncates a sequence pair in-place following the strategy.
- Parameters
ids (List[int]) – Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
pair_ids (List[int], optional) – Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
num_tokens_to_remove (int, optional, defaults to 0) – Number of tokens to remove using the truncation strategy.
truncation_strategy (str or TruncationStrategy, optional, defaults to 'longest_first') – The strategy to follow for truncation. Can be:
  - 'longest_first': Truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences is provided.
  - 'only_first': Only truncate the first sequence of a pair if a pair of sequences is provided.
  - 'only_second': Only truncate the second sequence of a pair if a pair of sequences is provided.
  - 'do_not_truncate': No truncation (i.e., can output a sequence longer than the model maximum admissible input size).
stride (int, optional, defaults to 0) – If set to a positive number, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
- Returns
The truncated ids, the truncated pair_ids and the list of overflowing tokens.
- Return type
Tuple[List[int], List[int], List[int]]
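Example (a minimal sketch; the ids here are arbitrary integers, since the method operates on plain lists):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = list(range(10))
pair_ids = list(range(5))
# Remove 3 tokens, always taking them from the longer of the two sequences.
ids, pair_ids, overflowing = tokenizer.truncate_sequences(
    ids, pair_ids, num_tokens_to_remove=3, truncation_strategy='longest_first')
print(len(ids), len(pair_ids))  # 7 5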
SpecialTokensMixin¶
-
class
transformers.tokenization_utils_base.
SpecialTokensMixin
(verbose=True, **kwargs)[source]¶ A mixin derived by PreTrainedTokenizer and PreTrainedTokenizerFast to handle specific behaviors related to special tokens. In particular, this class holds the attributes which can be used to directly access these special tokens in a model-independent manner and allows setting and updating the special tokens.
- Parameters
bos_token (str or tokenizers.AddedToken, optional) – A special token representing the beginning of a sentence.
eos_token (str or tokenizers.AddedToken, optional) – A special token representing the end of a sentence.
unk_token (str or tokenizers.AddedToken, optional) – A special token representing an out-of-vocabulary token.
sep_token (str or tokenizers.AddedToken, optional) – A special token separating two different sentences in the same input (used by BERT for instance).
pad_token (str or tokenizers.AddedToken, optional) – A special token used to make arrays of tokens the same size for batching purposes, which will then be ignored by attention mechanisms or loss computation.
cls_token (str or tokenizers.AddedToken, optional) – A special token representing the class of the input (used by BERT for instance).
mask_token (str or tokenizers.AddedToken, optional) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT).
additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) – A tuple or a list of additional special tokens.
-
add_special_tokens
(special_tokens_dict: Dict[str, Union[str, tokenizers.AddedToken]]) → int[source]¶ Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).
Using add_special_tokens will ensure your special tokens can be used in several ways:
Special tokens are carefully handled by the tokenizer (they are never split).
You can easily refer to special tokens using tokenizer class attributes like
tokenizer.cls_token
. This makes it easy to develop model-agnostic training and fine-tuning scripts.
When possible, special tokens are already registered for provided pretrained models (for instance BertTokenizer's cls_token is already registered to be '[CLS]' and XLM's one is also registered to be '</s>').
- Parameters
special_tokens_dict (dictionary str to str or tokenizers.AddedToken) – Keys should be in the list of predefined special attributes: [bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens].
Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the unk_token to them).
- Returns
Number of tokens added to the vocabulary.
- Return type
int
Examples:
# Let's see how to add a new classification token to GPT-2
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')

special_tokens_dict = {'cls_token': '<CLS>'}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print('We have added', num_added_toks, 'tokens')
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))

assert tokenizer.cls_token == '<CLS>'
-
add_tokens
(new_tokens: Union[str, tokenizers.AddedToken, List[Union[str, tokenizers.AddedToken]]], special_tokens: bool = False) → int[source]¶ Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from length of the current vocabulary.
- Parameters
new_tokens (str, tokenizers.AddedToken or a list of str or tokenizers.AddedToken) – Tokens are only added if they are not already in the vocabulary. tokenizers.AddedToken wraps a string token to let you personalize its behavior: whether this token should only match against a single word, whether this token should strip all potential whitespaces on the left side, whether this token should strip all potential whitespaces on the right side, etc.
special_tokens (bool, optional, defaults to False) – Can be used to specify if the token is a special token. This mostly changes the normalization behavior (special tokens like CLS or [MASK] are usually not lower-cased for instance).
See details for tokenizers.AddedToken in the HuggingFace tokenizers library.
- Returns
Number of tokens added to the vocabulary.
- Return type
int
Examples:
# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
# Notice: resize_token_embeddings expects to receive the full size of the new
# vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
-
property
additional_special_tokens
¶ All the additional special tokens you may want to use. Log an error if used while not having been set.
- Type
List[str]
-
property
additional_special_tokens_ids
¶ Ids of all the additional special tokens in the vocabulary. Log an error if used while not having been set.
- Type
List[int]
-
property
all_special_ids
¶ List the ids of the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.
- Type
List[int]
-
property
all_special_tokens
¶ All the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.
Convert tokens of tokenizers.AddedToken type to string.
- Type
List[str]
-
property
all_special_tokens_extended
¶ All the special tokens ('<unk>', '<cls>', etc.) mapped to class attributes.
Don’t convert tokens of tokenizers.AddedToken type to string so they can be used to control more finely how special tokens are tokenized.
- Type
List[Union[str, tokenizers.AddedToken]]
-
property
bos_token
¶ Beginning of sentence token. Log an error if used while not having been set.
- Type
str
-
property
bos_token_id
¶ Id of the beginning of sentence token in the vocabulary. Returns None if the token has not been set.
- Type
Optional[int]
-
property
cls_token
¶ Classification token, to extract a summary of an input sequence leveraging self-attention along the full depth of the model. Log an error if used while not having been set.
- Type
str
-
property
cls_token_id
¶ Id of the classification token in the vocabulary, to extract a summary of an input sequence leveraging self-attention along the full depth of the model.
Returns None if the token has not been set.
- Type
Optional[int]
-
property
eos_token
¶ End of sentence token. Log an error if used while not having been set.
- Type
str
-
property
eos_token_id
¶ Id of the end of sentence token in the vocabulary. Returns None if the token has not been set.
- Type
Optional[int]
-
property
mask_token
¶ Mask token, to use when training a model with masked-language modeling. Log an error if used while not having been set.
- Type
str
-
property
mask_token_id
¶ Id of the mask token in the vocabulary, used when training a model with masked-language modeling. Returns None if the token has not been set.
- Type
Optional[int]
-
property
pad_token
¶ Padding token. Log an error if used while not having been set.
- Type
str
-
property
pad_token_id
¶ Id of the padding token in the vocabulary. Returns None if the token has not been set.
- Type
Optional[int]
-
property
pad_token_type_id
¶ Id of the padding token type in the vocabulary.
- Type
int
-
sanitize_special_tokens
() → int[source]¶ Make sure that all the special tokens attributes of the tokenizer (tokenizer.mask_token, tokenizer.cls_token, etc.) are in the vocabulary.
Add the missing ones to the vocabulary if needed.
- Returns
The number of tokens added to the vocabulary during the operation.
- Return type
int
-
property
sep_token
¶ Separation token, to separate context and query in an input sequence. Log an error if used while not having been set.
- Type
str
-
property
sep_token_id
¶ Id of the separation token in the vocabulary, to separate context and query in an input sequence. Returns None if the token has not been set.
- Type
Optional[int]
-
property
special_tokens_map
¶ A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values ('<unk>', '<cls>', etc.).
Convert potential tokens of tokenizers.AddedToken type to string.
- Type
Dict[str, Union[str, List[str]]]
-
property
special_tokens_map_extended
¶ A dictionary mapping special token class attributes (cls_token, unk_token, etc.) to their values ('<unk>', '<cls>', etc.).
Don’t convert tokens of tokenizers.AddedToken type to string so they can be used to control more finely how special tokens are tokenized.
- Type
Dict[str, Union[str, tokenizers.AddedToken, List[Union[str, tokenizers.AddedToken]]]]
-
property
unk_token
¶ Unknown token. Log an error if used while not having been set.
- Type
str
-
property
unk_token_id
¶ Id of the unknown token in the vocabulary. Returns None if the token has not been set.
- Type
Optional[int]
Enums and namedtuples¶
-
class
transformers.tokenization_utils_base.
ExplicitEnum
(value)[source]¶ Enum with more explicit error message for missing values.
-
class
transformers.tokenization_utils_base.
PaddingStrategy
(value)[source]¶ Possible values for the padding argument in PreTrainedTokenizerBase.__call__(). Useful for tab-completion in an IDE.
-
class
transformers.tokenization_utils_base.
TensorType
(value)[source]¶ Possible values for the return_tensors argument in PreTrainedTokenizerBase.__call__(). Useful for tab-completion in an IDE.
-
class
transformers.tokenization_utils_base.
TruncationStrategy
(value)[source]¶ Possible values for the truncation argument in PreTrainedTokenizerBase.__call__(). Useful for tab-completion in an IDE.
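For illustration, a minimal sketch of passing the enum members instead of their string values (assuming bert-base-uncased is available):
from transformers import BertTokenizer
from transformers.tokenization_utils_base import (PaddingStrategy, TensorType,
                                                  TruncationStrategy)

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Enum members are accepted anywhere their string values are.
batch = tokenizer(
    ["first sentence", "a slightly longer second sentence"],
    padding=PaddingStrategy.LONGEST,              # same as padding='longest'
    truncation=TruncationStrategy.LONGEST_FIRST,  # same as truncation='longest_first'
    return_tensors=TensorType.NUMPY,              # same as return_tensors='np'
)
print(batch["input_ids"].shape)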