Tokenizer

A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full Python implementation and a “Fast” implementation based on the Rust library tokenizers. The “Fast” implementations allow:

  1. a significant speed-up, in particular when doing batched tokenization, and

  2. additional methods to map between the original string (characters and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token). Currently no “Fast” implementation is available for the SentencePiece-based tokenizers (for the T5, ALBERT, CamemBERT, XLMRoBERTa and XLNet models).

The base classes PreTrainedTokenizer and PreTrainedTokenizerFast implement the common methods for encoding string inputs into model inputs (see below) and for instantiating/saving Python and “Fast” tokenizers, either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace’s AWS S3 repository). They both rely on PreTrainedTokenizerBase, which contains the common methods, and on SpecialTokensMixin.

PreTrainedTokenizer and PreTrainedTokenizerFast thus implement the main methods for using all the tokenizers (a short usage sketch follows this list):

  • Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers).

  • Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…).

  • Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the tokenizer for easy access, and making sure they are not split during tokenization.
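
A minimal sketch of these operations, assuming the bert-base-uncased checkpoint is available (the exact tokens produced are illustrative):

    from transformers import AutoTokenizer

    # Load a pretrained tokenizer (downloads the vocabulary on first use).
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    # Tokenizing: split a string into sub-word token strings.
    tokens = tokenizer.tokenize("Hello, tokenizers!")

    # Converting token strings to ids and back.
    ids = tokenizer.convert_tokens_to_ids(tokens)
    assert tokenizer.convert_ids_to_tokens(ids) == tokens

    # Encoding/decoding: tokenize and convert to integers in one step
    # (special tokens like [CLS]/[SEP] are added by default).
    input_ids = tokenizer.encode("Hello, tokenizers!")
    text = tokenizer.decode(input_ids, skip_special_tokens=True)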

BatchEncoding holds the output of the tokenizer’s encoding methods (__call__, encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure Python tokenizer, this class behaves just like a standard Python dictionary and holds the various model inputs computed by these methods (input_ids, attention_mask…). When the tokenizer is a “Fast” tokenizer (i.e., backed by the HuggingFace tokenizers library), this class additionally provides several advanced alignment methods which can be used to map between the original string (characters and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token).
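
For instance, a sketch of the two behaviors (again assuming bert-base-uncased; the alignment calls only work with a fast tokenizer):

    from transformers import BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    encoding = tokenizer("Hello world")          # returns a BatchEncoding

    # Standard dictionary behavior, shared by slow and fast tokenizers.
    input_ids = encoding["input_ids"]
    attention_mask = encoding["attention_mask"]

    # Alignment methods, only available when backed by a fast tokenizer.
    encoding.char_to_token(6)    # index of the token covering character 6 ('w')
    encoding.token_to_chars(1)   # span of original characters covered by token 1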

PreTrainedTokenizer

class transformers.PreTrainedTokenizer(**kwargs)[source]

Base class for all slow tokenizers.

Inherits from PreTrainedTokenizerBase.

Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.

This class also contains the added tokens in a unified way on top of all tokenizers, so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).

Class attributes (overridden by derived classes)

  • vocab_files_names (Dict[str, str]) – A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).

  • pretrained_vocab_files_map (Dict[str, Dict[str, str]]) – A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level keys being the short-cut names of the pretrained models, and the associated values the URL of the associated pretrained vocabulary file.

  • max_model_input_sizes (Dict[str, Optional[int]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.

  • pretrained_init_configuration (Dict[str, Dict[str, Any]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.

  • model_input_names (List[str]) – A list of inputs expected in the forward pass of the model.

  • padding_side (str) – The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.

Parameters
  • model_max_length (int, optional) – The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be 'right' or 'left'. Default value is picked from the class attribute of the same name.

  • model_input_names (List[str], optional) – The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.

  • bos_token (str or tokenizers.AddedToken, optional) – A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.

  • eos_token (str or tokenizers.AddedToken, optional) – A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.

  • unk_token (str or tokenizers.AddedToken, optional) – A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.

  • sep_token (str or tokenizers.AddedToken, optional) – A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.

  • pad_token (str or tokenizers.AddedToken, optional) – A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.

  • cls_token (str or tokenizers.AddedToken, optional) – A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.

  • mask_token (str or tokenizers.AddedToken, optional) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.

  • additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) – A tuple or a list of additional special tokens. Add them here to ensure they won’t be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids.

convert_ids_to_tokens(ids: int, skip_special_tokens: bool = False) → str[source]
convert_ids_to_tokens(ids: List[int], skip_special_tokens: bool = False) → List[str]

Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.

Parameters
  • ids (int or List[int]) – The token id (or token ids) to convert to tokens.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

Returns

The decoded token(s).

Return type

str or List[str]

convert_tokens_to_ids(tokens: Union[str, List[str]]) → Union[int, List[int]][source]

Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.

Parameters

tokens (str or List[str]) – One or several token(s) to convert to token id(s).

Returns

The token id or list of token ids.

Return type

int or List[int]

convert_tokens_to_string(tokens: List[str]) → str[source]

Converts a sequence of tokens into a single string. The simplest way to do it is " ".join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.

Parameters

tokens (List[str]) – The tokens to join into a string.

Returns

The joined tokens.

Return type

str
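
A round-trip sketch of these conversion methods, continuing with the bert-base-uncased tokenizer from the earlier sketch (the exact sub-word split is illustrative):

    tokens = tokenizer.tokenize("tokenizers")      # e.g. ['token', '##izer', '##s']
    ids = tokenizer.convert_tokens_to_ids(tokens)
    tokens_back = tokenizer.convert_ids_to_tokens(ids)
    # convert_tokens_to_string() removes the '##' sub-word artifacts while joining.
    text = tokenizer.convert_tokens_to_string(tokens_back)   # 'tokenizers'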

get_added_vocab() → Dict[str, int][source]

Returns the added tokens in the vocabulary as a dictionary of token to index.

Returns

The added tokens.

Return type

Dict[str, int]
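
For example (a sketch; the id shown depends on the size of the base vocabulary):

    tokenizer.add_tokens(["<new_token>"])   # extend the vocabulary
    tokenizer.get_added_vocab()             # e.g. {'<new_token>': 30522}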

get_special_tokens_mask(token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False) → List[int][source]

Retrieves a mask indicating, for each position, whether the token is a special token (1) or a sequence token (0). This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.

Parameters
  • token_ids_0 (List[int]) – List of ids of the first sequence.

  • token_ids_1 (List[int], optional) – List of ids of the second sequence.

  • already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.

Returns

1 for a special token, 0 for a sequence token.

Return type

A list of integers in the range [0, 1]
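
A sketch with a BERT-style tokenizer, where [CLS] and [SEP] are the special tokens (the ids shown are illustrative):

    ids = tokenizer.encode("hello")   # e.g. [101, 7592, 102] = [CLS] hello [SEP]
    tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
    # [1, 0, 1] -- 1 marks the special tokens, 0 the sequence token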

num_special_tokens_to_add(pair: bool = False) → int[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Note

This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

Parameters

pair (bool, optional, defaults to False) – Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.

Returns

Number of special tokens added to sequences.

Return type

int
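
For instance, with a BERT-style tokenizer:

    tokenizer.num_special_tokens_to_add()           # 2: [CLS] ... [SEP]
    tokenizer.num_special_tokens_to_add(pair=True)  # 3: [CLS] ... [SEP] ... [SEP]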

prepare_for_tokenization(text: str, is_split_into_words: bool = False, **kwargs) → Tuple[str, Dict[str, Any]][source]

Performs any necessary transformations before tokenization.

This method should pop the arguments from kwargs and return the remaining kwargs as well. We test the kwargs at the end of the encoding process to be sure all the arguments have been used.

Parameters
  • text (str) – The text to prepare.

  • is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.

  • kwargs – Keyword arguments to use for the tokenization.

Returns

The prepared text and the unused kwargs.

Return type

Tuple[str, Dict[str, Any]]
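
A subclass could use this hook as in the partial sketch below; do_lower_case is a hypothetical keyword argument used only for illustration, not part of the base API:

    from transformers import PreTrainedTokenizer

    class MyTokenizer(PreTrainedTokenizer):
        def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
            # Pop our custom argument and return the remaining kwargs so the
            # encoding process can verify that every argument was consumed.
            if kwargs.pop("do_lower_case", False):  # hypothetical kwarg
                text = text.lower()
            return (text, kwargs)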

tokenize(text: str, **kwargs) → List[str][source]

Converts a string into a sequence of tokens, using the tokenizer.

Splits into words for word-based vocabularies or into sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece). Takes care of added tokens.

Parameters
  • text (str) – The sequence to be encoded.

  • **kwargs (additional keyword arguments) – Passed along to the model-specific prepare_for_tokenization preprocessing method.

Returns

The list of tokens.

Return type

List[str]

property vocab_size

Size of the base vocabulary (without the added tokens).

Type

int

PreTrainedTokenizerFast

The PreTrainedTokenizerFast depends on the tokenizers library. The tokenizers obtained from the 🤗 tokenizers library can be loaded very simply into 🤗 transformers. Take a look at the Using tokenizers from 🤗 tokenizers page to understand how this is done.
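
For example, a sketch of both loading paths (the "tokenizer.json" path is illustrative):

    from tokenizers import Tokenizer
    from transformers import PreTrainedTokenizerFast

    # Wrap an in-memory tokenizers.Tokenizer object...
    backend = Tokenizer.from_file("tokenizer.json")
    fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=backend)

    # ...or point directly at a serialized tokenizer file.
    fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")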

class transformers.PreTrainedTokenizerFast(*args, **kwargs)[source]

Base class for all fast tokenizers (wrapping HuggingFace tokenizers library).

Inherits from PreTrainedTokenizerBase.

Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers, as well as adding tokens to the vocabulary.

This class also contains the added tokens in a unified way on top of all tokenizers so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).

Class attributes (overridden by derived classes)

  • vocab_files_names (Dict[str, str]) – A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).

  • pretrained_vocab_files_map (Dict[str, Dict[str, str]]) – A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level keys being the short-cut names of the pretrained models, and the associated values the URL of the associated pretrained vocabulary file.

  • max_model_input_sizes (Dict[str, Optional[int]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.

  • pretrained_init_configuration (Dict[str, Dict[str, Any]]) – A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.

  • model_input_names (List[str]) – A list of inputs expected in the forward pass of the model.

  • padding_side (str) – The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.

Parameters
  • model_max_length (int, optional) – The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).

  • padding_side (str, optional) – The side on which the model should have padding applied. Should be 'right' or 'left'. Default value is picked from the class attribute of the same name.

  • model_input_names (List[str], optional) – The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.

  • bos_token (str or tokenizers.AddedToken, optional) – A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.

  • eos_token (str or tokenizers.AddedToken, optional) – A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.

  • unk_token (str or tokenizers.AddedToken, optional) – A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.

  • sep_token (str or tokenizers.AddedToken, optional) – A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.

  • pad_token (str or tokenizers.AddedToken, optional) – A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.

  • cls_token (str or tokenizers.AddedToken, optional) – A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.

  • mask_token (str or tokenizers.AddedToken, optional) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.

  • additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) – A tuple or a list of additional special tokens. Add them here to ensure they won’t be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids.

  • tokenizer_object (tokenizers.Tokenizer) – A tokenizers.Tokenizer object from 🤗 tokenizers to instantiate from. See Using tokenizers from 🤗 tokenizers for more information.

  • tokenizer_file (str) – A path to a local JSON file representing a previously serialized tokenizers.Tokenizer object from 🤗 tokenizers.

property backend_tokenizer

The Rust tokenizer used as a backend.

Type

tokenizers.implementations.BaseTokenizer

convert_ids_to_tokens(ids: Union[int, List[int]], skip_special_tokens: bool = False) → Union[str, List[str]][source]

Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.

Parameters
  • ids (int or List[int]) – The token id (or token ids) to convert to tokens.

  • skip_special_tokens (bool, optional, defaults to False) – Whether or not to remove special tokens in the decoding.

Returns

The decoded token(s).

Return type

str or List[str]

convert_tokens_to_ids(tokens: Union[str, List[str]]) → Union[int, List[int]][source]

Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.

Parameters

tokens (str or List[str]) – One or several token(s) to convert to token id(s).

Returns

The token id or list of token ids.

Return type

int or List[int]

convert_tokens_to_string(tokens: List[str]) → str[source]

Converts a sequence of tokens into a single string. The simplest way to do it is " ".join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.

Parameters

tokens (List[str]) – The tokens to join into a string.

Returns

The joined tokens.

Return type

str

property decoder

The Rust decoder for this tokenizer.

Type

tokenizers.decoders.Decoder

get_added_vocab() → Dict[str, int][source]

Returns the added tokens in the vocabulary as a dictionary of token to index.

Returns

The added tokens.

Return type

Dict[str, int]

get_vocab() → Dict[str, int][source]

Returns the vocabulary as a dictionary of token to index.

tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

Returns

The vocabulary.

Return type

Dict[str, int]
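
For example, with a fast tokenizer from the sketches above (assuming "hello" is in its vocabulary):

    vocab = fast_tokenizer.get_vocab()   # e.g. {'[PAD]': 0, ..., 'hello': 7592, ...}
    assert vocab["hello"] == fast_tokenizer.convert_tokens_to_ids("hello")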

num_special_tokens_to_add(pair: bool = False) → int[source]

Returns the number of added tokens when encoding a sequence with special tokens.

Note

This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

Parameters

pair (bool, optional, defaults to False) – Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.

Returns

Number of special tokens added to sequences.

Return type

int

set_truncation_and_padding(padding_strategy: transformers.file_utils.PaddingStrategy, truncation_strategy: transformers.tokenization_utils_base.TruncationStrategy, max_length: int, stride: int, pad_to_multiple_of: Optional[int])[source]

Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers library) and restore the tokenizer settings afterwards.

This method assumes the provided tokenizer has no padding/truncation strategy before the managed section. If your tokenizer set a padding/truncation strategy before, it will be reset to no padding/truncation when exiting the managed section.

Parameters
  • padding_strategy (PaddingStrategy) – The kind of padding that will be applied to the input.

  • truncation_strategy (TruncationStrategy) – The kind of truncation that will be applied to the input.

  • max_length (int) – The maximum size of a sequence.

  • stride (int) – The stride to use when handling overflow.

  • pad_to_multiple_of (int, optional) – If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).

tokenize(text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) → List[str][source]

Converts a string into a sequence of tokens, replacing unknown tokens with the unk_token.

Parameters
  • text (str) – The sequence to be encoded.

  • pair (str, optional) – A second sequence to be encoded with the first.

  • add_special_tokens (bool, optional, defaults to False) – Whether or not to add the special tokens associated with the corresponding model.

  • kwargs (additional keyword arguments, optional) – Will be passed to the underlying model-specific encode method. See details in __call__().

Returns

The list of tokens.

Return type

List[str]

train_new_from_iterator(text_iterator, vocab_size, new_special_tokens=None, special_tokens_map=None, **kwargs)[source]

Trains a tokenizer on a new corpus with the same defaults (in terms of special tokens or tokenization pipeline) as the current one.

Parameters
  • text_iterator (generator of List[str]) – The training corpus. Should be a generator of batches of texts, for instance a list of lists of texts if you have everything in memory.

  • vocab_size (int) – The size of the vocabulary you want for your tokenizer.

  • new_special_tokens (list of str or AddedToken, optional) – A list of new special tokens to add to the tokenizer you are training.

  • special_tokens_map (Dict[str, str], optional) – If you want to rename some of the special tokens this tokenizer uses, pass along a mapping from old special token name to new special token name in this argument.

  • kwargs – Additional keyword arguments passed along to the trainer from the 🤗 Tokenizers library.

Returns

A new tokenizer of the same type as the original one, trained on text_iterator.

Return type

PreTrainedTokenizerFast
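
A sketch, where old_tokenizer stands for any existing PreTrainedTokenizerFast and the corpus and vocab_size are illustrative:

    # The corpus is a generator of batches of texts; an in-memory
    # list of lists of texts works as well.
    corpus = [
        ["The first batch of training texts.", "Another text."],
        ["The second batch of training texts."],
    ]
    new_tokenizer = old_tokenizer.train_new_from_iterator(
        corpus,
        vocab_size=8000,
        new_special_tokens=["<extra_token>"],
    )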

property vocab_size

Size of the base vocabulary (without the added tokens).

Type

int

BatchEncoding

class transformers.BatchEncoding(data: Optional[Dict[str, Any]] = None, encoding: Optional[Union[tokenizers.Encoding, Sequence[tokenizers.Encoding]]] = None, tensor_type: Union[None, str, transformers.file_utils.TensorType] = None, prepend_batch_axis: bool = False, n_sequences: Optional[int] = None)[source]

Holds the output of the encode_plus() and batch_encode_plus() methods (tokens, attention_masks, etc.).

This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space.

Parameters
  • data (dict) – Dictionary of lists/arrays/tensors returned by the encode/batch_encode methods (‘input_ids’, ‘attention_mask’, etc.).

  • encoding (tokenizers.Encoding or Sequence[tokenizers.Encoding], optional) – If the tokenizer is a fast tokenizer which outputs additional information like the mapping from word/character space to token space, the tokenizers.Encoding instance or list of instances (for batches) holds this information.

  • tensor_type (Union[None, str, TensorType], optional) – You can give a tensor_type here to convert the lists of integers into PyTorch/TensorFlow/NumPy tensors at initialization.

  • prepend_batch_axis (bool, optional, defaults to False) – Whether or not to add a batch axis when converting to tensors (see tensor_type above).

  • n_sequences (Optional[int], optional) – The number of sequences used to generate each sample from the batch encoded in this BatchEncoding.

char_to_token(batch_or_char_index: int, char_index: Optional[int] = None, sequence_index: int = 0) → int[source]

Get the index of the token in the encoded output comprising a character in the original string for a sequence of the batch.

Can be called as:

  • self.char_to_token(char_index) if batch size is 1

  • self.char_to_token(batch_index, char_index) if batch size is greater than or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_char_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the sequence.

  • char_index (int, optional) – If a batch index is provided in batch_or_char_index, this can be the index of the character in the sequence.

  • sequence_index (int, optional, defaults to 0) – If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.

Returns

Index of the token.

Return type

int
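
For example, with a fast tokenizer:

    encoding = fast_tokenizer("Hello world")
    # Which token covers character 6, the 'w' of "world"?
    encoding.char_to_token(6)
    # With batched inputs, pass the batch index first:
    # encoding.char_to_token(0, 6)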

char_to_word(batch_or_char_index: int, char_index: Optional[int] = None, sequence_index: int = 0) → int[source]

Get the word in the original string corresponding to a character in the original string of a sequence of the batch.

Can be called as:

  • self.char_to_word(char_index) if batch size is 1

  • self.char_to_word(batch_index, char_index) if batch size is greater than 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_char_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the original string.

  • char_index (int, optional) – If a batch index is provided in batch_or_char_index, this can be the index of the character in the original string.

  • sequence_index (int, optional, defaults to 0) – If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.

Returns

Index of the word in the original string.

Return type

int

convert_to_tensors(tensor_type: Optional[Union[str, transformers.file_utils.TensorType]] = None, prepend_batch_axis: bool = False)[source]

Convert the inner content to tensors.

Parameters
  • tensor_type (str or TensorType, optional) – The type of tensors to use. If str, should be one of the values of the enum TensorType. If None, no modification is done.

  • prepend_batch_axis (bool, optional, defaults to False) – Whether or not to add the batch dimension during the conversion.

property encodings

The list of all encodings from the tokenization process. Returns None if the input was tokenized through a Python (i.e., not fast) tokenizer.

Type

Optional[List[tokenizers.Encoding]]

property is_fast

Indicates whether this BatchEncoding was generated from the result of a PreTrainedTokenizerFast or not.

Type

bool

items() → a set-like object providing a view on D’s items[source]
keys() → a set-like object providing a view on D’s keys[source]
property n_sequences

The number of sequences used to generate each sample from the batch encoded in this BatchEncoding. Currently can be one of None (unknown), 1 (a single sentence) or 2 (a pair of sentences).

Type

Optional[int]

sequence_ids(batch_index: int = 0) → List[Optional[int]][source]

Return a list mapping the tokens to the id of their original sentences:

  • None for special tokens added around or between sequences,

  • 0 for tokens corresponding to words in the first sequence,

  • 1 for tokens corresponding to words in the second sequence when a pair of sequences was jointly encoded.

Parameters

batch_index (int, optional, defaults to 0) – The index to access in the batch.

Returns

A list indicating the sequence id corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding sequence.

Return type

List[Optional[int]]
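
For example, with a BERT-style pair encoding (the exact layout shown is illustrative):

    encoding = fast_tokenizer("Question?", "Answer.")
    encoding.sequence_ids()
    # e.g. [None, 0, 0, None, 1, 1, None]
    #      [CLS] question ? [SEP] answer . [SEP]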

to(device: Union[str, torch.device]) → BatchEncoding[source]

Send all values to device by calling v.to(device) (PyTorch only).

Parameters

device (str or torch.device) – The device to put the tensors on.

Returns

The same instance after modification.

Return type

BatchEncoding
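
For example (PyTorch only; assumes a CUDA device is available):

    encoding = fast_tokenizer("Hello world", return_tensors="pt")
    encoding = encoding.to("cuda")   # every tensor value is moved to the GPU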

token_to_chars(batch_or_token_index: int, token_index: Optional[int] = None) → transformers.tokenization_utils_base.CharSpan[source]

Get the character span corresponding to an encoded token in a sequence of the batch.

Character spans are returned as a CharSpan with:

  • start – Index of the first character in the original string associated to the token.

  • end – Index of the character following the last character in the original string associated to the token.

Can be called as:

  • self.token_to_chars(token_index) if batch size is 1

  • self.token_to_chars(batch_index, token_index) if batch size is greater than or equal to 1

Parameters
  • batch_or_token_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.

  • token_index (int, optional) – If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.

Returns

Span of characters in the original string.

Return type

CharSpan

token_to_sequence(batch_or_token_index: int, token_index: Optional[int] = None) → int[source]

Get the index of the sequence represented by the given token. In the general use case, this method returns 0 for a single sequence or the first sequence of a pair, and 1 for the second sequence of a pair.

Can be called as:

  • self.token_to_sequence(token_index) if batch size is 1

  • self.token_to_sequence(batch_index, token_index) if batch size is greater than 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_token_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.

  • token_index (int, optional) – If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.

Returns

Index of the sequence represented by the given token.

Return type

int

token_to_word(batch_or_token_index: int, token_index: Optional[int] = None) → int[source]

Get the index of the word corresponding to (i.e., comprising) an encoded token in a sequence of the batch.

Can be called as:

  • self.token_to_word(token_index) if batch size is 1

  • self.token_to_word(batch_index, token_index) if batch size is greater than 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_token_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.

  • token_index (int, optional) – If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.

Returns

Index of the word in the input sequence.

Return type

int

tokens(batch_index: int = 0) → List[str][source]

Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to integer indices) at a given batch index (only works for the output of a fast tokenizer).

Parameters

batch_index (int, optional, defaults to 0) – The index to access in the batch.

Returns

The list of tokens at that index.

Return type

List[str]

values() → an object providing a view on D’s values[source]
word_ids(batch_index: int = 0) → List[Optional[int]][source]

Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.

Parameters

batch_index (int, optional, defaults to 0) – The index to access in the batch.

Returns

A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).

Return type

List[Optional[int]]
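
For example (the sub-word split shown is illustrative):

    encoding = fast_tokenizer("tokenizers rock")
    encoding.word_ids()
    # e.g. [None, 0, 0, 0, 1, None]
    #      [CLS] token ##izer ##s rock [SEP] -- sub-words share a word index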

word_to_chars(batch_or_word_index: int, word_index: Optional[int] = None, sequence_index: int = 0) → transformers.tokenization_utils_base.CharSpan[source]

Get the character span in the original string corresponding to a given word in a sequence of the batch.

Character spans are returned as a CharSpan NamedTuple with:

  • start: index of the first character in the original string

  • end: index of the character following the last character in the original string

Can be called as:

  • self.word_to_chars(word_index) if batch size is 1

  • self.word_to_chars(batch_index, word_index) if batch size is greater than or equal to 1

Parameters
  • batch_or_word_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.

  • word_index (int, optional) – If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.

  • sequence_index (int, optional, defaults to 0) – If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.

Returns

Span(s) of the associated character or characters in the string. CharSpan is a NamedTuple with:

  • start: index of the first character associated to the token in the original string

  • end: index of the character following the last character associated to the token in the original string

Return type

CharSpan or List[CharSpan]

word_to_tokens(batch_or_word_index: int, word_index: Optional[int] = None, sequence_index: int = 0) → Optional[transformers.tokenization_utils_base.TokenSpan][source]

Get the encoded token span corresponding to a word in a sequence of the batch.

Token spans are returned as a TokenSpan with:

  • start – Index of the first token.

  • end – Index of the token following the last token.

Can be called as:

  • self.word_to_tokens(word_index, sequence_index=0) if batch size is 1

  • self.word_to_tokens(batch_index, word_index, sequence_index=0) if batch size is greater than or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.

Parameters
  • batch_or_word_index (int) – Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.

  • word_index (int, optional) – If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.

  • sequence_index (int, optional, defaults to 0) – If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.

Returns

Span of tokens in the encoded sequence, or None if no tokens correspond to the word.

Return type

Optional[TokenSpan]
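
For example, with a fast tokenizer:

    encoding = fast_tokenizer("tokenizers rock")
    span = encoding.word_to_tokens(0)      # TokenSpan for the first word
    if span is not None:                   # None if the word has no tokens
        word_tokens = encoding.tokens()[span.start:span.end]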

words(batch_index: int = 0) → List[Optional[int]][source]

Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.

Parameters

batch_index (int, optional, defaults to 0) – The index to access in the batch.

Returns

A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).

Return type

List[Optional[int]]