BARThez¶
Overview¶
The BARThez model was proposed in BARThez: a Skilled Pretrained French Sequence-to-Sequence Model (https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, and Michalis Vazirgiannis on 23 Oct, 2020.
The abstract of the paper:
Inductive transfer learning, enabled by self-supervised learning, has taken the entire Natural Language Processing (NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language understanding tasks. While there are some notable exceptions, most of the available models and research have been conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language (to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research that we adapted to suit BART’s perturbation schemes. Unlike already existing BERT-based French language models such as CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already pretrained multilingual BART on BARThez’s corpus, and we show that the resulting model, which we call mBARTHez, provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.
The authors’ code can be found here.
Examples¶
BARThez can be fine-tuned on sequence-to-sequence tasks in the same way as BART; see examples/seq2seq/.
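For a quick end-to-end check, the sketch below runs abstractive summarization with a fine-tuned BARThez checkpoint. The checkpoint name moussaKam/barthez-orangesum-abstract is an assumption here; substitute any BARThez model you have fine-tuned yourself.

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    checkpoint = "moussaKam/barthez-orangesum-abstract"  # assumed Hub checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

    article = (
        "La tempête a privé d'électricité des milliers de foyers dans l'ouest de la "
        "France, et les équipes d'intervention travaillent à rétablir le réseau."
    )
    inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

    # Beam search usually gives more fluent abstracts than greedy decoding.
    summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))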
BarthezTokenizer¶
class transformers.BarthezTokenizer(vocab_file, bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', **kwargs)¶

Adapted from CamembertTokenizer and BartTokenizer. Construct a BARThez tokenizer. Based on SentencePiece.

This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

- Parameters
- vocab_file (str) – SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.
- bos_token (str, optional, defaults to "<s>") – The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
  Note: When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
- eos_token (str, optional, defaults to "</s>") – The end of sequence token.
  Note: When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
- sep_token (str, optional, defaults to "</s>") – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- cls_token (str, optional, defaults to "<s>") – The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.
- mask_token (str, optional, defaults to "<mask>") – The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) – Additional special tokens used by the tokenizer.
Attributes: sp_model (SentencePieceProcessor): The SentencePiece processor that is used for every conversion (string, tokens and IDs).
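In practice the tokenizer is rarely built directly from a raw SentencePiece file; it is usually loaded from a pretrained checkpoint. A minimal sketch, assuming the moussaKam/barthez checkpoint is available:

    from transformers import BarthezTokenizer

    # Load from a pretrained checkpoint (the checkpoint name is an assumption).
    tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")

    encoding = tokenizer("Le camembert est délicieux.")
    print(encoding["input_ids"])                              # IDs wrapped with <s> ... </s>
    print(tokenizer.tokenize("Le camembert est délicieux."))  # SentencePiece sub-words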
build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]¶

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BARThez sequence has the following format:

- single sequence: <s> X </s>
- pair of sequences: <s> A </s></s> B </s>
- Parameters
- token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
- Returns
List of input IDs with the appropriate special tokens.
- Return type
List[int]
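A minimal sketch of the two formats (the moussaKam/barthez checkpoint name is an assumption):

    from transformers import BarthezTokenizer

    tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")  # assumed checkpoint

    ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Bonjour"))
    ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("le monde"))

    single = tokenizer.build_inputs_with_special_tokens(ids_a)
    pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

    print(tokenizer.convert_ids_to_tokens(single))  # ['<s>', ..., '</s>']
    print(tokenizer.convert_ids_to_tokens(pair))    # ['<s>', ..., '</s>', '</s>', ..., '</s>']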
convert_tokens_to_string(tokens)¶

Converts a sequence of tokens (strings for sub-words) into a single string.
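For example (same assumed checkpoint as above):

    from transformers import BarthezTokenizer

    tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")  # assumed checkpoint

    tokens = tokenizer.tokenize("Bonjour le monde")
    print(tokens)                                      # SentencePiece pieces, e.g. ['▁Bonjour', '▁le', '▁monde']
    print(tokenizer.convert_tokens_to_string(tokens))  # "Bonjour le monde"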
create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]¶

Create a mask from the two sequences passed to be used in a sequence-pair classification task.
- Parameters
- token_ids_0 (List[int]) – List of IDs.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
- Returns
List of zeros.
- Return type
List[int]
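As documented above, the returned list contains only zeros. A minimal sketch (checkpoint name assumed):

    from transformers import BarthezTokenizer

    tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")  # assumed checkpoint

    ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Bonjour"))
    ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("le monde"))

    token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
    print(token_type_ids)  # all zeros, one per position of the pair built with special tokens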
get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int]¶

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's prepare_for_model method.

- Parameters
- token_ids_0 (List[int]) – List of IDs.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
- already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.
- Returns
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
- Return type
List[int]
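For example, with a sequence that already contains special tokens (checkpoint name assumed):

    from transformers import BarthezTokenizer

    tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")  # assumed checkpoint

    ids = tokenizer.encode("Bonjour le monde")  # already wrapped with <s> ... </s>
    mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
    print(mask)  # 1 at the <s> and </s> positions, 0 for regular tokens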
get_vocab()¶

Returns the vocabulary as a dictionary of token to index. tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.

- Returns
The vocabulary.
- Return type
Dict[str, int]
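For example (checkpoint name assumed):

    from transformers import BarthezTokenizer

    tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")  # assumed checkpoint

    vocab = tokenizer.get_vocab()
    print(len(vocab))  # full vocabulary size, including added tokens
    print(vocab["<s>"] == tokenizer.convert_tokens_to_ids("<s>"))  # True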
save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str]¶

Save only the vocabulary of the tokenizer (vocabulary + added tokens).

This method won’t save the configuration and special token mappings of the tokenizer. Use _save_pretrained() to save the whole state of the tokenizer.

- Parameters
- save_directory (str) – The directory in which to save the vocabulary.
- filename_prefix (str, optional) – An optional prefix to add to the name of the saved files.
- Returns
Paths to the files saved.
- Return type
Tuple[str]
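A minimal sketch (checkpoint name assumed); note that save_pretrained() is the usual way to persist the full tokenizer state:

    import tempfile

    from transformers import BarthezTokenizer

    tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")  # assumed checkpoint

    with tempfile.TemporaryDirectory() as tmp_dir:
        # Writes only the SentencePiece model file; prefer save_pretrained() for the full state.
        files = tokenizer.save_vocabulary(tmp_dir)
        print(files)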
property vocab_size¶

Size of the base vocabulary (without the added tokens).
- Type
int
BarthezTokenizerFast¶
class transformers.BarthezTokenizerFast(vocab_file, tokenizer_file=None, bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', **kwargs)¶

Adapted from CamembertTokenizer and BartTokenizer. Construct a “fast” BARThez tokenizer. Based on SentencePiece.

This tokenizer inherits from PreTrainedTokenizerFast, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

- Parameters
- vocab_file (str) – SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.
- bos_token (str, optional, defaults to "<s>") – The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
  Note: When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
- eos_token (str, optional, defaults to "</s>") – The end of sequence token.
  Note: When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
- sep_token (str, optional, defaults to "</s>") – The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- cls_token (str, optional, defaults to "<s>") – The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- pad_token (str, optional, defaults to "<pad>") – The token used for padding, for example when batching sequences of different lengths.
- mask_token (str, optional, defaults to "<mask>") – The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) – Additional special tokens used by the tokenizer.
Attributes: sp_model (SentencePieceProcessor): The SentencePiece processor that is used for every conversion (string, tokens and IDs).
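Like the slow tokenizer, the fast tokenizer is normally loaded from a pretrained checkpoint; it is particularly convenient for batched encoding. A minimal sketch, assuming the moussaKam/barthez checkpoint:

    from transformers import BarthezTokenizerFast

    tokenizer = BarthezTokenizerFast.from_pretrained("moussaKam/barthez")  # assumed checkpoint

    batch = tokenizer(
        ["Paris est la capitale de la France.", "Le camembert est délicieux."],
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    print(batch["input_ids"].shape)    # (2, longest sequence in the batch)
    print(batch["attention_mask"][0])  # 1 for real tokens, 0 for padding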
build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]¶

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BARThez sequence has the following format:

- single sequence: <s> X </s>
- pair of sequences: <s> A </s></s> B </s>
- Parameters
- token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
- Returns
List of input IDs with the appropriate special tokens.
- Return type
List[int]
create_token_type_ids_from_sequences(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]¶

Create a mask from the two sequences passed to be used in a sequence-pair classification task.
- Parameters
- token_ids_0 (List[int]) – List of IDs.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
- Returns
List of zeros.
- Return type
List[int]
get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int]¶

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's prepare_for_model method.

- Parameters
- token_ids_0 (List[int]) – List of IDs.
- token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.
- already_has_special_tokens (bool, optional, defaults to False) – Whether or not the token list is already formatted with special tokens for the model.
- Returns
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
- Return type
List[int]
save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str]¶

Save only the vocabulary of the tokenizer (vocabulary + added tokens).

This method won’t save the configuration and special token mappings of the tokenizer. Use _save_pretrained() to save the whole state of the tokenizer.

- Parameters
- save_directory (str) – The directory in which to save the vocabulary.
- filename_prefix (str, optional) – An optional prefix to add to the name of the saved files.
- Returns
Paths to the files saved.
- Return type
Tuple[str]
slow_tokenizer_class¶

alias of transformers.models.barthez.tokenization_barthez.BarthezTokenizer