Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full Python implementation and a "fast" implementation backed by the Rust library 🤗 Tokenizers. The "fast" implementations allow:
- a significant speed-up, in particular when doing batched tokenization, and
- additional methods to map between the original string (characters and words) and the token space (e.g., getting the index of the token comprising a given character, or the span of characters corresponding to a given token).
The base classes PreTrainedTokenizer and PreTrainedTokenizerFast implement the common methods for encoding string inputs into model inputs (see below) and for instantiating/saving Python and "fast" tokenizers, either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace's AWS S3 repository). They both rely on PreTrainedTokenizerBase, which contains the common methods, and on SpecialTokensMixin.
PreTrainedTokenizer and PreTrainedTokenizerFast thus implement the main methods for using all the tokenizers:
- Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers).
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…).
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes of the tokenizer for easy access, and making sure they are not split during tokenization.
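As a quick sketch of this shared API, the example below loads a pretrained tokenizer and round-trips a sentence through tokenize, encode and decode (the checkpoint name is only illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

tokens = tokenizer.tokenize("Hello world!")    # sub-word token strings
ids = tokenizer.convert_tokens_to_ids(tokens)  # vocabulary ids, no special tokens
text = tokenizer.decode(tokenizer.encode("Hello world!"), skip_special_tokens=True)
print(tokens, ids, text)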
BatchEncoding holds the output of the encoding methods of PreTrainedTokenizerBase (__call__, encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure Python tokenizer, this class behaves just like a standard Python dictionary and holds the various model inputs computed by these methods (input_ids, attention_mask…). When the tokenizer is a "fast" tokenizer (i.e., backed by the HuggingFace tokenizers library), this class additionally provides several advanced alignment methods which can be used to map between the original string (characters and words) and the token space (e.g., getting the index of the token comprising a given character, or the span of characters corresponding to a given token).
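For instance (a minimal sketch; the checkpoint name is illustrative), a BatchEncoding can be indexed like a dictionary, and a fast tokenizer additionally exposes alignment helpers:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")  # fast by default
encoding = tokenizer("Hello world!")

print(encoding["input_ids"])       # dict-style access to a model input
print(encoding.word_ids())         # fast-only: token index -> word index
print(encoding.token_to_chars(1))  # fast-only: token index -> character span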
PreTrainedTokenizer
class transformers.PreTrainedTokenizer
< source >( **kwargs )
Parameters
- model_max_length (`int`, optional) — The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will default to VERY_LARGE_INTEGER (`int(1e30)`).
- padding_side (`str`, optional) — The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- truncation_side (`str`, optional) — The side on which the model should have truncation applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- chat_template (`str`, optional) — A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description.
- model_input_names (`List[string]`, optional) — The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or `"attention_mask"`). Default value is picked from the class attribute of the same name.
- bos_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and `self.bos_token_id`.
- eos_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing the end of a sentence. Will be associated to `self.eos_token` and `self.eos_token_id`.
- unk_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and `self.unk_token_id`.
- sep_token (`str` or `tokenizers.AddedToken`, optional) — A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to `self.sep_token` and `self.sep_token_id`.
- pad_token (`str` or `tokenizers.AddedToken`, optional) — A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`.
- cls_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing the class of the input (used by BERT for instance). Will be associated to `self.cls_token` and `self.cls_token_id`.
- mask_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to `self.mask_token` and `self.mask_token_id`.
- additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, optional) — A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with `skip_special_tokens` set to True. If they are not part of the vocabulary, they will be added at the end of the vocabulary.
- clean_up_tokenization_spaces (`bool`, optional, defaults to `True`) — Whether or not the model should clean up the spaces that were added when splitting the input text during the tokenization process.
- split_special_tokens (`bool`, optional, defaults to `False`) — Whether or not the special tokens should be split during the tokenization process. Setting this argument affects the internal state of the tokenizer. The default behavior is to not split special tokens: if `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>']`. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will give `['<', 's', '>']`.
Base class for all slow tokenizers.
Inherits from PreTrainedTokenizerBase.
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers, so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).
Class attributes (overridden by derived classes)
- vocab_files_names (`Dict[str, str]`) — A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
- pretrained_vocab_files_map (`Dict[str, Dict[str, str]]`) — A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file.
- model_input_names (`List[str]`) — A list of inputs expected in the forward pass of the model.
- padding_side (`str`) — The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`.
- truncation_side (`str`) — The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`.
__call__
< source >( text: Union = None text_pair: Union = None text_target: Union = None text_pair_target: Union = None add_special_tokens: bool = True padding: Union = False truncation: Union = None max_length: Optional = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: Optional = None padding_side: Optional = None return_tensors: Union = None return_token_type_ids: Optional = None return_attention_mask: Optional = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding
Parameters
- text (`str`, `List[str]`, `List[List[str]]`, optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- text_pair (`str`, `List[str]`, `List[List[str]]`, optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- text_target (`str`, `List[str]`, `List[List[str]]`, optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- text_pair_target (`str`, `List[str]`, `List[List[str]]`, optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- add_special_tokens (`bool`, optional, defaults to `True`) — Whether or not to add special tokens when encoding the sequences. This will use the underlying `PretrainedTokenizerBase.build_inputs_with_special_tokens` function, which defines which tokens are automatically added to the input ids. This is useful if you want to add `bos` or `eos` tokens automatically.
- padding (`bool`, `str` or PaddingStrategy, optional, defaults to `False`) — Activates and controls padding. Accepts the following values:
  - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided.
  - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths).
- truncation (`bool`, `str` or TruncationStrategy, optional, defaults to `False`) — Activates and controls truncation. Accepts the following values:
  - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
- max_length (`int`, optional) — Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to `None`, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
- stride (`int`, optional, defaults to 0) — If set to a number along with `max_length`, the overflowing tokens returned when `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
- is_split_into_words (`bool`, optional, defaults to `False`) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
- pad_to_multiple_of (`int`, optional) — If set, will pad the sequence to a multiple of the provided value. Requires `padding` to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
- padding_side (`str`, optional) — The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- return_tensors (`str` or TensorType, optional) — If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.constant` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return Numpy `np.ndarray` objects.
- return_token_type_ids (`bool`, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the `return_outputs` attribute.
- return_attention_mask (`bool`, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the `return_outputs` attribute.
- return_overflowing_tokens (`bool`, optional, defaults to `False`) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with `truncation_strategy = longest_first` or `True`, an error is raised instead of returning overflowing tokens.
- return_special_tokens_mask (`bool`, optional, defaults to `False`) — Whether or not to return special tokens mask information.
- return_offsets_mapping (`bool`, optional, defaults to `False`) — Whether or not to return `(char_start, char_end)` for each token. This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise `NotImplementedError`.
- return_length (`bool`, optional, defaults to `False`) — Whether or not to return the lengths of the encoded inputs.
- verbose (`bool`, optional, defaults to `True`) — Whether or not to print more information and warnings.
- **kwargs — Passed to the `self.tokenize()` method.
Returns
A BatchEncoding with the following fields:
- input_ids — List of token ids to be fed to a model.
- token_type_ids — List of token type ids to be fed to a model (when `return_token_type_ids=True` or if “token_type_ids” is in `self.model_input_names`).
- attention_mask — List of indices specifying which tokens should be attended to by the model (when `return_attention_mask=True` or if “attention_mask” is in `self.model_input_names`).
- overflowing_tokens — List of overflowing tokens sequences (when a `max_length` is specified and `return_overflowing_tokens=True`).
- num_truncated_tokens — Number of tokens truncated (when a `max_length` is specified and `return_overflowing_tokens=True`).
- special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when `add_special_tokens=True` and `return_special_tokens_mask=True`).
- length — The length of the inputs (when `return_length=True`).
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
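A minimal sketch of calling the tokenizer on a batch with padding and truncation (the checkpoint name is illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
batch = tokenizer(
    ["A short sentence.", "A somewhat longer second sentence."],
    padding=True,         # pad to the longest sequence in the batch
    truncation=True,      # truncate to the model maximum length
    return_tensors="pt",  # return PyTorch tensors
)
print(batch["input_ids"].shape, batch["attention_mask"].shape)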
apply_chat_template
< source >( conversation: Union tools: Optional = None documents: Optional = None chat_template: Optional = None add_generation_prompt: bool = False continue_final_message: bool = False tokenize: bool = True padding: bool = False truncation: bool = False max_length: Optional = None return_tensors: Union = None return_dict: bool = False return_assistant_tokens_mask: bool = False tokenizer_kwargs: Optional = None **kwargs ) → Union[List[int], Dict]
Parameters
- conversation (`Union[List[Dict[str, str]], List[List[Dict[str, str]]]]`) — A list of dicts with “role” and “content” keys, representing the chat history so far.
- tools (`List[Dict]`, optional) — A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our chat templating guide for more information.
- documents (`List[Dict[str, str]]`, optional) — A list of dicts representing documents that will be accessible to the model if it is performing RAG (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect. We recommend that each document should be a dict containing “title” and “text” keys. Please see the RAG section of the chat templating guide for examples of passing documents with chat templates.
- chat_template (`str`, optional) — A Jinja template to use for this conversion. It is usually not necessary to pass anything to this argument, as the model’s template will be used by default.
- add_generation_prompt (`bool`, optional) — If this is set, a prompt with the token(s) that indicate the start of an assistant message will be appended to the formatted output. This is useful when you want to generate a response from the model. Note that this argument will be passed to the chat template, and so it must be supported in the template for this argument to have any effect.
- continue_final_message (`bool`, optional) — If this is set, the chat will be formatted so that the final message in the chat is open-ended, without any EOS tokens. The model will continue this message rather than starting a new one. This allows you to “prefill” part of the model’s response for it. Cannot be used at the same time as `add_generation_prompt`.
- tokenize (`bool`, defaults to `True`) — Whether to tokenize the output. If `False`, the output will be a string.
- padding (`bool`, defaults to `False`) — Whether to pad sequences to the maximum length. Has no effect if tokenize is `False`.
- truncation (`bool`, defaults to `False`) — Whether to truncate sequences at the maximum length. Has no effect if tokenize is `False`.
- max_length (`int`, optional) — Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is `False`. If not specified, the tokenizer’s `max_length` attribute will be used as a default.
- return_tensors (`str` or TensorType, optional) — If set, will return tensors of a particular framework. Has no effect if tokenize is `False`. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.Tensor` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return NumPy `np.ndarray` objects.
  - `'jax'`: Return JAX `jnp.ndarray` objects.
- return_dict (`bool`, defaults to `False`) — Whether to return a dictionary with named outputs. Has no effect if tokenize is `False`.
- tokenizer_kwargs (`Dict[str, Any]`, optional) — Additional kwargs to pass to the tokenizer.
- return_assistant_tokens_mask (`bool`, defaults to `False`) — Whether to return a mask of the assistant generated tokens. For tokens generated by the assistant, the mask will contain 1. For user and system tokens, the mask will contain 0. This functionality is only available for chat templates that support it via the `{% generation %}` keyword.
- **kwargs — Additional kwargs to pass to the template renderer. Will be accessible by the chat template.
Returns
`Union[List[int], Dict]` — A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like generate(). If `return_dict` is set, will return a dict of tokenizer outputs instead.
Converts a list of dictionaries with `"role"` and `"content"` keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting.
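For example (a sketch; the checkpoint is illustrative and is assumed to ship a chat template):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
chat = [{"role": "user", "content": "What is a tokenizer?"}]
# Render the chat and append the assistant prompt, returning a string.
prompt = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=False)
print(prompt)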
batch_decode
< source >( sequences: Union skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → List[str]
Parameters
- sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, optional) — Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces`.
- kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
Returns
List[str]
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
decode
< source >( token_ids: Union skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → str
Parameters
- token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, optional) — Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces`.
- kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
Returns
str
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
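A short sketch of decode and batch_decode in action (the checkpoint name is illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
ids = tokenizer.encode("Hello world!")
print(tokenizer.decode(ids))                            # includes special tokens such as [CLS]/[SEP]
print(tokenizer.decode(ids, skip_special_tokens=True))  # plain text only
print(tokenizer.batch_decode([ids, ids], skip_special_tokens=True))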
encode
< source >( text: Union text_pair: Union = None add_special_tokens: bool = True padding: Union = False truncation: Union = None max_length: Optional = None stride: int = 0 padding_side: Optional = None return_tensors: Union = None **kwargs ) → List[int], torch.Tensor, tf.Tensor or np.ndarray
Parameters
- text (`str`, `List[str]` or `List[int]`) — The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` method).
- text_pair (`str`, `List[str]` or `List[int]`, optional) — Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` method).
- add_special_tokens (`bool`, optional, defaults to `True`) — Whether or not to add special tokens when encoding the sequences. This will use the underlying `PretrainedTokenizerBase.build_inputs_with_special_tokens` function, which defines which tokens are automatically added to the input ids. This is useful if you want to add `bos` or `eos` tokens automatically.
- padding (`bool`, `str` or PaddingStrategy, optional, defaults to `False`) — Activates and controls padding. Accepts the following values:
  - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided.
  - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths).
- truncation (`bool`, `str` or TruncationStrategy, optional, defaults to `False`) — Activates and controls truncation. Accepts the following values:
  - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
- max_length (`int`, optional) — Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to `None`, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
- stride (`int`, optional, defaults to 0) — If set to a number along with `max_length`, the overflowing tokens returned when `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
- is_split_into_words (`bool`, optional, defaults to `False`) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
- pad_to_multiple_of (`int`, optional) — If set, will pad the sequence to a multiple of the provided value. Requires `padding` to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
- padding_side (`str`, optional) — The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- return_tensors (`str` or TensorType, optional) — If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.constant` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return Numpy `np.ndarray` objects.
- **kwargs — Passed along to the `.tokenize()` method.
Returns
`List[int]`, `torch.Tensor`, `tf.Tensor` or `np.ndarray` — The tokenized ids of the text.
Converts a string to a sequence of ids (integer), using the tokenizer and vocabulary.
Same as doing `self.convert_tokens_to_ids(self.tokenize(text))`.
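As a sketch (checkpoint name illustrative), encode is the composition of tokenize and convert_tokens_to_ids, plus special tokens:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
ids = tokenizer.encode("Hello world!")  # adds special tokens by default
same = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world!"))
# `ids` equals `same` with the special tokens added around the sequence.
print(ids, same)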
push_to_hub
< source >( repo_id: str use_temp_dir: Optional = None commit_message: Optional = None private: Optional = None token: Union = None max_shard_size: Union = '5GB' create_pr: bool = False safe_serialization: bool = True revision: str = None commit_description: str = None tags: Optional = None **deprecated_kwargs )
Parameters
- repo_id (`str`) — The name of the repository you want to push your tokenizer to. It should contain your organization name when pushing to a given organization.
- use_temp_dir (`bool`, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to `True` if there is no directory named like `repo_id`, `False` otherwise.
- commit_message (`str`, optional) — Message to commit while pushing. Will default to `"Upload tokenizer"`.
- private (`bool`, optional) — Whether or not the repository created should be private.
- token (`bool` or `str`, optional) — The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). Will default to `True` if `repo_url` is not specified.
- max_shard_size (`int` or `str`, optional, defaults to `"5GB"`) — Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size lower than this. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`). We default it to `"5GB"` so that users can easily load models on free-tier Google Colab instances without any CPU OOM issues.
- create_pr (`bool`, optional, defaults to `False`) — Whether or not to create a PR with the uploaded files or directly commit.
- safe_serialization (`bool`, optional, defaults to `True`) — Whether or not to convert the model weights to the safetensors format for safer serialization.
- revision (`str`, optional) — Branch to push the uploaded files to.
- commit_description (`str`, optional) — The description of the commit that will be created.
- tags (`List[str]`, optional) — List of tags to push on the Hub.
Upload the tokenizer files to the 🤗 Model Hub.
Examples:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
# Push the tokenizer to your namespace with the name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")
# Push the tokenizer to an organization with the name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
convert_ids_to_tokens
< source >( ids: Union skip_special_tokens: bool = False ) → str or List[str]
Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.
convert_tokens_to_ids
< source >( tokens: Union ) → int or List[int]
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.
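A quick sketch of the two conversion helpers (checkpoint name illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
ids = tokenizer.convert_tokens_to_ids(["Hello", "world"])
tokens = tokenizer.convert_ids_to_tokens(ids)
print(ids, tokens)  # round-trips through the vocabulary; no special tokens added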
get_added_vocab
< source >( ) → Dict[str, int]
Returns the added tokens in the vocabulary as a dictionary of token to index. Results might be different from the fast call because for now we always add the tokens even if they are already in the vocabulary. This is something we should change.
num_special_tokens_to_add
< source >( pair: bool = False ) → int
Returns the number of added tokens when encoding a sequence with special tokens.
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
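For instance (a sketch; the exact counts depend on the tokenizer's special tokens):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
# BERT-style tokenizers wrap a single sequence as [CLS] A [SEP] ...
print(tokenizer.num_special_tokens_to_add())
# ... and a pair of sequences as [CLS] A [SEP] B [SEP].
print(tokenizer.num_special_tokens_to_add(pair=True))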
prepare_for_tokenization
< source >( text: str is_split_into_words: bool = False **kwargs ) → Tuple[str, Dict[str, Any]]
Parameters
- text (`str`) — The text to prepare.
- is_split_into_words (`bool`, optional, defaults to `False`) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
- kwargs (`Dict[str, Any]`, optional) — Keyword arguments to use for the tokenization.
Returns
Tuple[str, Dict[str, Any]]
The prepared text and the unused kwargs.
Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return the remaining kwargs as well. We test the kwargs at the end of the encoding process to be sure all the arguments have been used.
tokenize
< source >( text: str **kwargs ) → List[str]
Converts a string into a sequence of tokens, using the tokenizer.
Splits into words for word-based vocabularies or into sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece). Takes care of added tokens.
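A minimal sketch (checkpoint name illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
print(tokenizer.tokenize("Tokenizers split text."))  # sub-word (WordPiece) token strings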
PreTrainedTokenizerFast
PreTrainedTokenizerFast depends on the tokenizers library. Tokenizers obtained from the 🤗 Tokenizers library can be loaded very simply into 🤗 Transformers. Take a look at the Using tokenizers from 🤗 tokenizers page to understand how this is done.
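As a sketch of that workflow (the file name is illustrative), a tokenizers.Tokenizer object can be wrapped directly:
from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

# A previously trained/serialized 🤗 tokenizers object (path is illustrative).
tokenizer_object = Tokenizer.from_file("tokenizer.json")
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer_object)
print(fast_tokenizer("Hello world!")["input_ids"])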
class transformers.PreTrainedTokenizerFast
< source >( *args **kwargs )
Parameters
- model_max_length (`int`, optional) — The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in `max_model_input_sizes` (see above). If no value is provided, will default to VERY_LARGE_INTEGER (`int(1e30)`).
- padding_side (`str`, optional) — The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- truncation_side (`str`, optional) — The side on which the model should have truncation applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- chat_template (`str`, optional) — A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description.
- model_input_names (`List[string]`, optional) — The list of inputs accepted by the forward pass of the model (like `"token_type_ids"` or `"attention_mask"`). Default value is picked from the class attribute of the same name.
- bos_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing the beginning of a sentence. Will be associated to `self.bos_token` and `self.bos_token_id`.
- eos_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing the end of a sentence. Will be associated to `self.eos_token` and `self.eos_token_id`.
- unk_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing an out-of-vocabulary token. Will be associated to `self.unk_token` and `self.unk_token_id`.
- sep_token (`str` or `tokenizers.AddedToken`, optional) — A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to `self.sep_token` and `self.sep_token_id`.
- pad_token (`str` or `tokenizers.AddedToken`, optional) — A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by attention mechanisms or loss computation. Will be associated to `self.pad_token` and `self.pad_token_id`.
- cls_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing the class of the input (used by BERT for instance). Will be associated to `self.cls_token` and `self.cls_token_id`.
- mask_token (`str` or `tokenizers.AddedToken`, optional) — A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to `self.mask_token` and `self.mask_token_id`.
- additional_special_tokens (tuple or list of `str` or `tokenizers.AddedToken`, optional) — A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with `skip_special_tokens` set to True. If they are not part of the vocabulary, they will be added at the end of the vocabulary.
- clean_up_tokenization_spaces (`bool`, optional, defaults to `True`) — Whether or not the model should clean up the spaces that were added when splitting the input text during the tokenization process.
- split_special_tokens (`bool`, optional, defaults to `False`) — Whether or not the special tokens should be split during the tokenization process. Setting this argument affects the internal state of the tokenizer. The default behavior is to not split special tokens: if `<s>` is the `bos_token`, then `tokenizer.tokenize("<s>") = ['<s>']`. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<s>")` will give `['<', 's', '>']`.
- tokenizer_object (`tokenizers.Tokenizer`) — A `tokenizers.Tokenizer` object from 🤗 tokenizers to instantiate from. See Using tokenizers from 🤗 tokenizers for more information.
- tokenizer_file (`str`) — A path to a local JSON file representing a previously serialized `tokenizers.Tokenizer` object from 🤗 tokenizers.
Base class for all fast tokenizers (wrapping HuggingFace tokenizers library).
Inherits from PreTrainedTokenizerBase.
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).
Class attributes (overridden by derived classes)
- vocab_files_names (`Dict[str, str]`) — A dictionary with, as keys, the `__init__` keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
- pretrained_vocab_files_map (`Dict[str, Dict[str, str]]`) — A dictionary of dictionaries, with the high-level keys being the `__init__` keyword name of each vocabulary file required by the model, the low-level being the `short-cut-names` of the pretrained models with, as associated values, the `url` to the associated pretrained vocabulary file.
- model_input_names (`List[str]`) — A list of inputs expected in the forward pass of the model.
- padding_side (`str`) — The default value for the side on which the model should have padding applied. Should be `'right'` or `'left'`.
- truncation_side (`str`) — The default value for the side on which the model should have truncation applied. Should be `'right'` or `'left'`.
__call__
< source >( text: Union = None text_pair: Union = None text_target: Union = None text_pair_target: Union = None add_special_tokens: bool = True padding: Union = False truncation: Union = None max_length: Optional = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: Optional = None padding_side: Optional = None return_tensors: Union = None return_token_type_ids: Optional = None return_attention_mask: Optional = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding
Parameters
- text (`str`, `List[str]`, `List[List[str]]`, optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- text_pair (`str`, `List[str]`, `List[List[str]]`, optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- text_target (`str`, `List[str]`, `List[List[str]]`, optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- text_pair_target (`str`, `List[str]`, `List[List[str]]`, optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as lists of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- add_special_tokens (`bool`, optional, defaults to `True`) — Whether or not to add special tokens when encoding the sequences. This will use the underlying `PretrainedTokenizerBase.build_inputs_with_special_tokens` function, which defines which tokens are automatically added to the input ids. This is useful if you want to add `bos` or `eos` tokens automatically.
- padding (`bool`, `str` or PaddingStrategy, optional, defaults to `False`) — Activates and controls padding. Accepts the following values:
  - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided.
  - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths).
- truncation (`bool`, `str` or TruncationStrategy, optional, defaults to `False`) — Activates and controls truncation. Accepts the following values:
  - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
- max_length (`int`, optional) — Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to `None`, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
- stride (`int`, optional, defaults to 0) — If set to a number along with `max_length`, the overflowing tokens returned when `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
- is_split_into_words (`bool`, optional, defaults to `False`) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
- pad_to_multiple_of (`int`, optional) — If set, will pad the sequence to a multiple of the provided value. Requires `padding` to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
- padding_side (`str`, optional) — The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- return_tensors (`str` or TensorType, optional) — If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.constant` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return Numpy `np.ndarray` objects.
- return_token_type_ids (`bool`, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the `return_outputs` attribute.
- return_attention_mask (`bool`, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the `return_outputs` attribute.
- return_overflowing_tokens (`bool`, optional, defaults to `False`) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with `truncation_strategy = longest_first` or `True`, an error is raised instead of returning overflowing tokens.
- return_special_tokens_mask (`bool`, optional, defaults to `False`) — Whether or not to return special tokens mask information.
- return_offsets_mapping (`bool`, optional, defaults to `False`) — Whether or not to return `(char_start, char_end)` for each token. This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise `NotImplementedError`.
- return_length (`bool`, optional, defaults to `False`) — Whether or not to return the lengths of the encoded inputs.
- verbose (`bool`, optional, defaults to `True`) — Whether or not to print more information and warnings.
- **kwargs — Passed to the `self.tokenize()` method.
Returns
A BatchEncoding with the following fields:
- input_ids — List of token ids to be fed to a model.
- token_type_ids — List of token type ids to be fed to a model (when `return_token_type_ids=True` or if “token_type_ids” is in `self.model_input_names`).
- attention_mask — List of indices specifying which tokens should be attended to by the model (when `return_attention_mask=True` or if “attention_mask” is in `self.model_input_names`).
- overflowing_tokens — List of overflowing tokens sequences (when a `max_length` is specified and `return_overflowing_tokens=True`).
- num_truncated_tokens — Number of tokens truncated (when a `max_length` is specified and `return_overflowing_tokens=True`).
- special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when `add_special_tokens=True` and `return_special_tokens_mask=True`).
- length — The length of the inputs (when `return_length=True`).
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
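Because this is a fast tokenizer, return_offsets_mapping is supported; a minimal sketch (checkpoint name illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
enc = tokenizer("Hello world!", return_offsets_mapping=True)
# Each (char_start, char_end) pair maps a token back to the original string.
print(list(zip(enc.tokens(), enc["offset_mapping"])))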
apply_chat_template
< source >( conversation: Union tools: Optional = None documents: Optional = None chat_template: Optional = None add_generation_prompt: bool = False continue_final_message: bool = False tokenize: bool = True padding: bool = False truncation: bool = False max_length: Optional = None return_tensors: Union = None return_dict: bool = False return_assistant_tokens_mask: bool = False tokenizer_kwargs: Optional = None **kwargs ) → Union[List[int], Dict]
Parameters
- conversation (`Union[List[Dict[str, str]], List[List[Dict[str, str]]]]`) — A list of dicts with “role” and “content” keys, representing the chat history so far.
- tools (`List[Dict]`, optional) — A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our chat templating guide for more information.
- documents (`List[Dict[str, str]]`, optional) — A list of dicts representing documents that will be accessible to the model if it is performing RAG (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect. We recommend that each document should be a dict containing “title” and “text” keys. Please see the RAG section of the chat templating guide for examples of passing documents with chat templates.
- chat_template (`str`, optional) — A Jinja template to use for this conversion. It is usually not necessary to pass anything to this argument, as the model’s template will be used by default.
- add_generation_prompt (`bool`, optional) — If this is set, a prompt with the token(s) that indicate the start of an assistant message will be appended to the formatted output. This is useful when you want to generate a response from the model. Note that this argument will be passed to the chat template, and so it must be supported in the template for this argument to have any effect.
- continue_final_message (`bool`, optional) — If this is set, the chat will be formatted so that the final message in the chat is open-ended, without any EOS tokens. The model will continue this message rather than starting a new one. This allows you to “prefill” part of the model’s response for it. Cannot be used at the same time as `add_generation_prompt`.
- tokenize (`bool`, defaults to `True`) — Whether to tokenize the output. If `False`, the output will be a string.
- padding (`bool`, defaults to `False`) — Whether to pad sequences to the maximum length. Has no effect if tokenize is `False`.
- truncation (`bool`, defaults to `False`) — Whether to truncate sequences at the maximum length. Has no effect if tokenize is `False`.
- max_length (`int`, optional) — Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is `False`. If not specified, the tokenizer’s `max_length` attribute will be used as a default.
- return_tensors (`str` or TensorType, optional) — If set, will return tensors of a particular framework. Has no effect if tokenize is `False`. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.Tensor` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return NumPy `np.ndarray` objects.
  - `'jax'`: Return JAX `jnp.ndarray` objects.
- return_dict (`bool`, defaults to `False`) — Whether to return a dictionary with named outputs. Has no effect if tokenize is `False`.
- tokenizer_kwargs (`Dict[str, Any]`, optional) — Additional kwargs to pass to the tokenizer.
- return_assistant_tokens_mask (`bool`, defaults to `False`) — Whether to return a mask of the assistant generated tokens. For tokens generated by the assistant, the mask will contain 1. For user and system tokens, the mask will contain 0. This functionality is only available for chat templates that support it via the `{% generation %}` keyword.
- **kwargs — Additional kwargs to pass to the template renderer. Will be accessible by the chat template.
Returns
`Union[List[int], Dict]` — A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like generate(). If `return_dict` is set, will return a dict of tokenizer outputs instead.
Converts a list of dictionaries with `"role"` and `"content"` keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting.
batch_decode
< source >( sequences: Union skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → List[str]
Parameters
- sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, optional) — Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces`.
- kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
Returns
List[str]
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
decode
< source >( token_ids: Union skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → str
Parameters
- token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, optional, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, optional) — Whether or not to clean up the tokenization spaces. If `None`, will default to `self.clean_up_tokenization_spaces`.
- kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
Returns
str
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
encode
< source >( text: Union text_pair: Union = None add_special_tokens: bool = True padding: Union = False truncation: Union = None max_length: Optional = None stride: int = 0 padding_side: Optional = None return_tensors: Union = None **kwargs ) → List[int], torch.Tensor, tf.Tensor or np.ndarray
Parameters
- text (`str`, `List[str]` or `List[int]`) — The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` method).
- text_pair (`str`, `List[str]` or `List[int]`, optional) — Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` method).
- add_special_tokens (`bool`, optional, defaults to `True`) — Whether or not to add special tokens when encoding the sequences. This will use the underlying `PretrainedTokenizerBase.build_inputs_with_special_tokens` function, which defines which tokens are automatically added to the input ids. This is useful if you want to add `bos` or `eos` tokens automatically.
- padding (`bool`, `str` or PaddingStrategy, optional, defaults to `False`) — Activates and controls padding. Accepts the following values:
  - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided.
  - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths).
- truncation (`bool`, `str` or TruncationStrategy, optional, defaults to `False`) — Activates and controls truncation. Accepts the following values:
  - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
- max_length (`int`, optional) — Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to `None`, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
- stride (`int`, optional, defaults to 0) — If set to a number along with `max_length`, the overflowing tokens returned when `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
- is_split_into_words (`bool`, optional, defaults to `False`) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
- pad_to_multiple_of (`int`, optional) — If set, will pad the sequence to a multiple of the provided value. Requires `padding` to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
- padding_side (`str`, optional) — The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- return_tensors (`str` or TensorType, optional) — If set, will return tensors instead of lists of Python integers. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.constant` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return Numpy `np.ndarray` objects.
- **kwargs — Passed along to the `.tokenize()` method.
Returns
`List[int]`, `torch.Tensor`, `tf.Tensor` or `np.ndarray` — The tokenized ids of the text.
Converts a string to a sequence of ids (integer), using the tokenizer and vocabulary.
Same as doing `self.convert_tokens_to_ids(self.tokenize(text))`.
push_to_hub
< source >( repo_id: str use_temp_dir: Optional = None commit_message: Optional = None private: Optional = None token: Union = None max_shard_size: Union = '5GB' create_pr: bool = False safe_serialization: bool = True revision: str = None commit_description: str = None tags: Optional = None **deprecated_kwargs )
Parameters
- repo_id (`str`) — The name of the repository you want to push your tokenizer to. It should contain your organization name when pushing to a given organization.
- use_temp_dir (`bool`, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to `True` if there is no directory named like `repo_id`, `False` otherwise.
- commit_message (`str`, optional) — Message to commit while pushing. Will default to `"Upload tokenizer"`.
- private (`bool`, optional) — Whether or not the repository created should be private.
- token (`bool` or `str`, optional) — The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). Will default to `True` if `repo_url` is not specified.
- max_shard_size (`int` or `str`, optional, defaults to `"5GB"`) — Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size lower than this. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`). We default it to `"5GB"` so that users can easily load models on free-tier Google Colab instances without any CPU OOM issues.
- create_pr (`bool`, optional, defaults to `False`) — Whether or not to create a PR with the uploaded files or directly commit.
- safe_serialization (`bool`, optional, defaults to `True`) — Whether or not to convert the model weights to the safetensors format for safer serialization.
- revision (`str`, optional) — Branch to push the uploaded files to.
- commit_description (`str`, optional) — The description of the commit that will be created.
- tags (`List[str]`, optional) — List of tags to push on the Hub.
Upload the tokenizer files to the 🤗 Model Hub.
Examples:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
# Push the tokenizer to your namespace with the name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")
# Push the tokenizer to an organization with the name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
convert_ids_to_tokens
< source >( ids: Union skip_special_tokens: bool = False ) → str or List[str]
Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.
convert_tokens_to_ids
< source >( tokens: Union ) → int or List[int]
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.
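As a quick illustration, assuming the `tokenizer` loaded in the example above, the two methods are inverses of each other (up to unknown tokens):
tokens = tokenizer.tokenize("Hello world")
ids = tokenizer.convert_tokens_to_ids(tokens)
assert tokenizer.convert_ids_to_tokens(ids) == tokens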
get_added_vocab
< source >( ) → Dict[str, int]
Returns the added tokens in the vocabulary as a dictionary of token to index.
num_special_tokens_to_add
< source >( pair: bool = False ) → int
Returns the number of added tokens when encoding a sequence with special tokens.
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
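A typical use is budgeting for special tokens when truncating text manually (a sketch, assuming a BERT-like `tokenizer` that wraps single sequences in [CLS] ... [SEP]):
max_length = 512
num_special = tokenizer.num_special_tokens_to_add(pair=False)  # 2 for a BERT-like tokenizer
text_budget = max_length - num_special  # tokens left for the actual text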
set_truncation_and_padding
< source >( padding_strategy: PaddingStrategy truncation_strategy: TruncationStrategy max_length: int stride: int pad_to_multiple_of: Optional padding_side: Optional )
Parameters
- padding_strategy (PaddingStrategy) — The kind of padding that will be applied to the input.
- truncation_strategy (TruncationStrategy) — The kind of truncation that will be applied to the input.
- max_length (`int`) — The maximum size of a sequence.
- stride (`int`) — The stride to use when handling overflow.
- pad_to_multiple_of (`int`, optional) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
- padding_side (`str`, optional) — The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers library) and restore the tokenizer settings afterwards.
The provided tokenizer has no padding / truncation strategy active before entering the managed section. If your tokenizer had a padding / truncation strategy set beforehand, it will be reset to no padding / no truncation when exiting the managed section.
train_new_from_iterator
< source >( text_iterator vocab_size length = None new_special_tokens = None special_tokens_map = None **kwargs ) → PreTrainedTokenizerFast
Parameters
- text_iterator (generator of `List[str]`) — The training corpus. Should be a generator of batches of texts, for instance a list of lists of texts if you have everything in memory.
- vocab_size (`int`) — The size of the vocabulary you want for your tokenizer.
- length (`int`, optional) — The total number of sequences in the iterator. This is used to provide meaningful progress tracking.
- new_special_tokens (list of `str` or `AddedToken`, optional) — A list of new special tokens to add to the tokenizer you are training.
- special_tokens_map (`Dict[str, str]`, optional) — If you want to rename some of the special tokens this tokenizer uses, pass along a mapping from old special token name to new special token name in this argument.
- kwargs (`Dict[str, Any]`, optional) — Additional keyword arguments passed along to the trainer from the 🤗 Tokenizers library.
Returns
A new tokenizer of the same type as the original one, trained on `text_iterator`.
Trains a tokenizer on a new corpus with the same defaults (in terms of special tokens or tokenization pipeline) as the current one.
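A minimal sketch, where `corpus` is a hypothetical list of strings held in memory and "my-new-tokenizer" a hypothetical output directory:
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

def batch_iterator(batch_size=1000):
    # Yield batches of texts rather than the whole corpus at once
    for i in range(0, len(corpus), batch_size):
        yield corpus[i : i + batch_size]

new_tokenizer = old_tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=25000)
new_tokenizer.save_pretrained("my-new-tokenizer")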
BatchEncoding
class transformers.BatchEncoding
< source >( data: Optional = None encoding: Union = None tensor_type: Union = None prepend_batch_axis: bool = False n_sequences: Optional = None )
Parameters
- data (`dict`, optional) — Dictionary of lists/arrays/tensors returned by the `__call__`/`encode_plus`/`batch_encode_plus` methods ('input_ids', 'attention_mask', etc.).
- encoding (`tokenizers.Encoding` or `Sequence[tokenizers.Encoding]`, optional) — If the tokenizer is a fast tokenizer which outputs additional information like mapping from word/character space to token space, the `tokenizers.Encoding` instance or list of instances (for batches) holds this information.
- tensor_type (`Union[None, str, TensorType]`, optional) — You can give a tensor_type here to convert the lists of integers to PyTorch/TensorFlow/Numpy tensors at initialization.
- prepend_batch_axis (`bool`, optional, defaults to `False`) — Whether or not to add a batch axis when converting to tensors (see `tensor_type` above). Note that this parameter only has an effect if the parameter `tensor_type` is set; otherwise it has no effect.
- n_sequences (`Optional[int]`, optional) — The number of sequences that were encoded to produce this `BatchEncoding`.
Holds the output of the `__call__()`, `encode_plus()` and `batch_encode_plus()` methods (tokens, attention_masks, etc.).
This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space.
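For example, the object returned by a tokenizer call can be read like a plain dictionary (the exact keys, such as token_type_ids, depend on the model):
encoding = tokenizer("Hello world")
print(list(encoding.keys()))  # e.g. ['input_ids', 'token_type_ids', 'attention_mask']
print(encoding["input_ids"])  # the list of token ids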
char_to_token
< source >( batch_or_char_index: int char_index: Optional = None sequence_index: int = 0 ) → int
Parameters
- batch_or_char_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the sequence.
- char_index (`int`, optional) — If a batch index is provided in batch_or_char_index, this can be the index of the character in the sequence.
- sequence_index (`int`, optional, defaults to 0) — If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.
Returns
`int`
Index of the token, or None if the char index refers to a whitespace-only token and whitespace is trimmed with `trim_offsets=True`.
Get the index of the token in the encoded output comprising a character in the original string for a sequence of the batch.
Can be called as:
- `self.char_to_token(char_index)` if batch size is 1
- `self.char_to_token(batch_index, char_index)` if batch size is greater than or equal to 1
This method is particularly suited to input sequences provided as pre-tokenized sequences (i.e., with words defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.
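A short sketch, assuming the fast `tokenizer` loaded earlier (character index 6 is the 'w' of "world"):
text = "Hello world"
encoding = tokenizer(text)
token_index = encoding.char_to_token(6)  # index of the token covering "world"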
char_to_word
< source >( batch_or_char_index: int char_index: Optional = None sequence_index: int = 0 ) → int or List[int]
Parameters
- batch_or_char_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the original string.
- char_index (`int`, optional) — If a batch index is provided in batch_or_char_index, this can be the index of the character in the original string.
- sequence_index (`int`, optional, defaults to 0) — If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.
Returns
`int` or `List[int]`
Index or indices of the corresponding word(s) in the original string.
Get the index of the word in the original string corresponding to a character in the original string, for a sequence of the batch.
Can be called as:
- `self.char_to_word(char_index)` if batch size is 1
- `self.char_to_word(batch_index, char_index)` if batch size is greater than 1
This method is particularly suited to input sequences provided as pre-tokenized sequences (i.e., with words defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.
convert_to_tensors
< source >( tensor_type: Union = None prepend_batch_axis: bool = False )
Parameters
- tensor_type (`str` or TensorType, optional) — The type of tensors to use. If `str`, should be one of the values of the enum TensorType. If `None`, no modification is done.
- prepend_batch_axis (`bool`, optional, defaults to `False`) — Whether or not to add the batch dimension during the conversion.
Convert the inner content to tensors.
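For example (a sketch assuming PyTorch is installed; padding is needed so all sequences share one length):
encoding = tokenizer(["Hello world", "Hi"], padding=True)
encoding.convert_to_tensors(tensor_type="pt")
print(type(encoding["input_ids"]))  # <class 'torch.Tensor'>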
sequence_ids
< source >( batch_index: int = 0 ) → List[Optional[int]]
Parameters
- batch_index (`int`, optional, defaults to 0) — The index to access in the batch.
Returns
`List[Optional[int]]`
A list indicating the sequence id corresponding to each token. Special tokens added by the tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding sequence.
Return a list mapping the tokens to the id of their original sentences:
- `None` for special tokens added around or between sequences,
- `0` for tokens corresponding to words in the first sequence,
- `1` for tokens corresponding to words in the second sequence when a pair of sequences was jointly encoded.
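For instance, with a pair of sequences (the exact layout depends on the tokenizer's special tokens, so the output shown is illustrative):
encoding = tokenizer("What is a tokenizer?", "It prepares model inputs.")
print(encoding.sequence_ids())
# e.g. [None, 0, 0, 0, 0, 0, None, 1, 1, 1, 1, 1, None] for a BERT-like tokenizer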
to
< source >( device: Union ) → BatchEncoding
Parameters
- device (`str` or `torch.device`) — The device to put the tensors on.
Returns
The same instance after modification.
Send all values to device by calling `v.to(device)` (PyTorch only).
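A short sketch of moving a batch to GPU before a forward pass:
import torch

encoding = tokenizer("Hello world", return_tensors="pt")
if torch.cuda.is_available():
    encoding = encoding.to("cuda")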
token_to_chars
< source >( batch_or_token_index: int token_index: Optional = None ) → CharSpan
Parameters
- batch_or_token_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
- token_index (`int`, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token or tokens in the sequence.
Returns
CharSpan
Span of characters in the original string, or None if the token (e.g. `<s>`, `</s>`) doesn't correspond to any chars in the original string.
Get the character span corresponding to an encoded token in a sequence of the batch.
Character spans are returned as a CharSpan with:
- start — Index of the first character in the original string associated to the token.
- end — Index of the character following the last character in the original string associated to the token.
Can be called as:
- `self.token_to_chars(token_index)` if batch size is 1
- `self.token_to_chars(batch_index, token_index)` if batch size is greater than or equal to 1
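For example (a sketch; token index 1 assumes a BERT-like tokenizer where index 0 is [CLS]):
text = "Hello world"
encoding = tokenizer(text)
span = encoding.token_to_chars(1)
print(text[span.start:span.end])  # "Hello"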
token_to_sequence
< source >( batch_or_token_index: int token_index: Optional = None ) → int
Parameters
- batch_or_token_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
- token_index (`int`, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.
Returns
`int`
Index of the sequence the given token belongs to.
Get the index of the sequence represented by the given token. In the general use case, this method returns `0` for a single sequence or the first sequence of a pair, and `1` for the second sequence of a pair.
Can be called as:
- `self.token_to_sequence(token_index)` if batch size is 1
- `self.token_to_sequence(batch_index, token_index)` if batch size is greater than 1
This method is particularly suited to input sequences provided as pre-tokenized sequences (i.e., with words defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.
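For example, iterating over all tokens of an encoded pair (special tokens may map to None):
encoding = tokenizer("What is a tokenizer?", "It prepares model inputs.")
for i, token in enumerate(encoding.tokens()):
    print(token, encoding.token_to_sequence(i))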
token_to_word
< source >( batch_or_token_index: int token_index: Optional = None ) → int
Parameters
- batch_or_token_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
- token_index (`int`, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.
Returns
`int`
Index of the word in the input sequence.
Get the index of the word corresponding to (i.e., comprising) an encoded token in a sequence of the batch.
Can be called as:
- `self.token_to_word(token_index)` if batch size is 1
- `self.token_to_word(batch_index, token_index)` if batch size is greater than 1
This method is particularly suited to input sequences provided as pre-tokenized sequences (i.e., with words defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.
tokens
< source >( batch_index: int = 0 ) → List[str]
Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to integer indices) at a given batch index (only works for the output of a fast tokenizer).
word_ids
< source >( batch_index: int = 0 ) → List[Optional[int]]
Parameters
- batch_index (`int`, optional, defaults to 0) — The index to access in the batch.
Returns
`List[Optional[int]]`
A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
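For example (a sketch; the exact split into subword pieces depends on the vocabulary):
encoding = tokenizer("Tokenizers are great")
print(encoding.tokens())
print(encoding.word_ids())
# Pieces of the same word share a word index, e.g. [None, 0, 0, 1, 2, None]
# if "Tokenizers" is split into two subword tokens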
word_to_chars
< source >( batch_or_word_index: int word_index: Optional = None sequence_index: int = 0 ) → CharSpan or List[CharSpan]
Parameters
- batch_or_word_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.
- word_index (`int`, optional) — If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.
- sequence_index (`int`, optional, defaults to 0) — If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.
Returns
`CharSpan` or `List[CharSpan]`
Span(s) of the associated character or characters in the string. CharSpan are NamedTuples with:
- start: index of the first character associated to the token in the original string
- end: index of the character following the last character associated to the token in the original string
Get the character span in the original string corresponding to a given word in a sequence of the batch.
Character spans are returned as a CharSpan NamedTuple with:
- start: index of the first character in the original string
- end: index of the character following the last character in the original string
Can be called as:
- `self.word_to_chars(word_index)` if batch size is 1
- `self.word_to_chars(batch_index, word_index)` if batch size is greater than or equal to 1
word_to_tokens
< source >( batch_or_word_index: int word_index: Optional = None sequence_index: int = 0 ) → (TokenSpan, optional)
Parameters
- batch_or_word_index (`int`) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.
- word_index (`int`, optional) — If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.
- sequence_index (`int`, optional, defaults to 0) — If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.
Returns
(TokenSpan, optional)
Span of tokens in the encoded sequence. Returns `None` if no tokens correspond to the word. This can happen especially when the token is a special token that has been used to format the tokenization. For example when we add a class token at the very beginning of the tokenization.
Get the encoded token span corresponding to a word in a sequence of the batch.
Token spans are returned as a TokenSpan with:
- start — Index of the first token.
- end — Index of the token following the last token.
Can be called as:
- `self.word_to_tokens(word_index, sequence_index: int = 0)` if batch size is 1
- `self.word_to_tokens(batch_index, word_index, sequence_index: int = 0)` if batch size is greater than or equal to 1
This method is particularly suited to input sequences provided as pre-tokenized sequences (i.e., with words defined by the user). In this case it makes it easy to associate encoded tokens with the provided tokenized words.
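For example, with a pre-tokenized input (a sketch; requires a fast tokenizer):
encoding = tokenizer(["Hello", "world"], is_split_into_words=True)
span = encoding.word_to_tokens(1)  # tokens covering the word "world"
if span is not None:
    print(encoding.tokens()[span.start:span.end])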
words
< source >( batch_index: int = 0 ) → List[Optional[int]]
Parameters
- batch_index (`int`, optional, defaults to 0) — The index to access in the batch.
Returns
`List[Optional[int]]`
A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.