BpeTrainer
( )
Parameters

vocab_size (int, optional) — The size of the final vocabulary, including all tokens and alphabet.
min_frequency (int, optional) — The minimum frequency a pair should have in order to be merged.
show_progress (bool, optional) — Whether to show progress bars while training.
special_tokens (List[Union[str, AddedToken]], optional) — A list of special tokens the model should know of.
limit_alphabet (int, optional) — The maximum number of different characters to keep in the alphabet.
initial_alphabet (List[str], optional) — A list of characters to include in the initial alphabet, even if not seen in the training dataset. If the strings contain more than one character, only the first one is kept.
continuing_subword_prefix (str, optional) — A prefix to be used for every subword that is not a beginning-of-word.
end_of_word_suffix (str, optional) — A suffix to be used for every subword that is an end-of-word.

Trainer capable of training a BPE model
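A minimal sketch of training with BpeTrainer; the toy corpus and the small vocab_size are illustrative, not from the source:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build an empty BPE tokenizer; [UNK] handles out-of-vocabulary tokens.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Keep the vocabulary tiny and register [UNK] as a special token.
trainer = BpeTrainer(
    vocab_size=200,
    min_frequency=2,
    special_tokens=["[UNK]"],
    show_progress=False,
)

corpus = ["low lower lowest", "new newer newest", "wide wider widest"]
tokenizer.train_from_iterator(corpus, trainer=trainer)

encoding = tokenizer.encode("lower newest")
print(encoding.tokens)
```

With min_frequency=2, only character pairs seen at least twice in the corpus are merged into new tokens.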
UnigramTrainer
( vocab_size = 8000, show_progress = True, special_tokens = [], shrinking_factor = 0.75, unk_token = None, max_piece_length = 16, n_sub_iterations = 2 )
Parameters

vocab_size (int) — The size of the final vocabulary, including all tokens and alphabet.
show_progress (bool) — Whether to show progress bars while training.
special_tokens (List[Union[str, AddedToken]]) — A list of special tokens the model should know of.
initial_alphabet (List[str]) — A list of characters to include in the initial alphabet, even if not seen in the training dataset. If the strings contain more than one character, only the first one is kept.
shrinking_factor (float) — The shrinking factor used at each step of the training to prune the vocabulary.
unk_token (str) — The token used for out-of-vocabulary tokens.
max_piece_length (int) — The maximum length of a given token.
n_sub_iterations (int) — The number of iterations of the EM algorithm to perform before pruning the vocabulary.

Trainer capable of training a Unigram model
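A minimal sketch of training with UnigramTrainer; the corpus is a toy example. Note that unk_token must also appear in special_tokens so it receives a vocabulary id:

```python
from tokenizers import Tokenizer
from tokenizers.models import Unigram
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import UnigramTrainer

# Unigram starts from an empty model; the trainer builds the vocabulary.
tokenizer = Tokenizer(Unigram())
tokenizer.pre_tokenizer = Whitespace()

trainer = UnigramTrainer(
    vocab_size=100,
    special_tokens=["[UNK]"],
    unk_token="[UNK]",
    shrinking_factor=0.75,  # prune 25% of candidates per step
    n_sub_iterations=2,     # EM passes before each pruning step
    show_progress=False,
)

corpus = ["hello world", "hello there", "world peace"]
tokenizer.train_from_iterator(corpus, trainer=trainer)
tokens = tokenizer.encode("hello world").tokens
print(tokens)
```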
WordLevelTrainer
( )
Parameters

vocab_size (int, optional) — The size of the final vocabulary, including all tokens and alphabet.
min_frequency (int, optional) — The minimum frequency a pair should have in order to be merged.
show_progress (bool, optional) — Whether to show progress bars while training.
special_tokens (List[Union[str, AddedToken]]) — A list of special tokens the model should know of.

Trainer capable of training a WordLevel model
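A minimal sketch of training with WordLevelTrainer; the corpus is a toy example. WordLevel stores whole words, so any word not seen during training maps to the unknown token:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordLevelTrainer

# The model's unk_token catches words absent from the trained vocabulary.
tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = WordLevelTrainer(
    min_frequency=1,
    special_tokens=["[UNK]"],
    show_progress=False,
)

corpus = ["the cat sat", "the dog sat", "the cat ran"]
tokenizer.train_from_iterator(corpus, trainer=trainer)

enc = tokenizer.encode("the cat flew")
print(enc.tokens)  # "flew" was never seen, so it maps to [UNK]
```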
WordPieceTrainer
( vocab_size = 30000, min_frequency = 0, show_progress = True, special_tokens = [], limit_alphabet = None, initial_alphabet = [], continuing_subword_prefix = '##', end_of_word_suffix = None )
Parameters

vocab_size (int, optional) — The size of the final vocabulary, including all tokens and alphabet.
min_frequency (int, optional) — The minimum frequency a pair should have in order to be merged.
show_progress (bool, optional) — Whether to show progress bars while training.
special_tokens (List[Union[str, AddedToken]], optional) — A list of special tokens the model should know of.
limit_alphabet (int, optional) — The maximum number of different characters to keep in the alphabet.
initial_alphabet (List[str], optional) — A list of characters to include in the initial alphabet, even if not seen in the training dataset. If the strings contain more than one character, only the first one is kept.
continuing_subword_prefix (str, optional) — A prefix to be used for every subword that is not a beginning-of-word.
end_of_word_suffix (str, optional) — A suffix to be used for every subword that is an end-of-word.

Trainer capable of training a WordPiece model
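A minimal sketch of training with WordPieceTrainer; the corpus and vocab_size are illustrative. The default continuing_subword_prefix of "##" yields BERT-style pieces:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordPiece
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordPieceTrainer

tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Non-initial subwords are emitted with the "##" prefix, e.g. "##able".
trainer = WordPieceTrainer(
    vocab_size=200,
    special_tokens=["[UNK]"],
    continuing_subword_prefix="##",
    show_progress=False,
)

corpus = ["unaffordable unavoidable", "affordable avoidable", "afford avoid"]
tokenizer.train_from_iterator(corpus, trainer=trainer)

tokens = tokenizer.encode("unaffordable").tokens
print(tokens)
```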