1 ---
2 language: fr
3 ---
4
5 # CamemBERT: a Tasty French Language Model
6
7 ## Introduction
8
9 [CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
10
It is now available on Hugging Face in 6 different versions, with varying numbers of parameters, amounts of pretraining data, and pretraining-data source domains.
12
For further information or requests, please visit the [CamemBERT website](https://camembert-model.fr/).
14
15 ## Pre-trained models
16
17 | Model | #params | Arch. | Training data |
18 |--------------------------------|--------------------------------|-------|-----------------------------------|
19 | `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
20 | `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
21 | `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
22 | `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
23 | `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
24 | `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
25
## How to use CamemBERT with Hugging Face
27
##### Load CamemBERT and its sub-word tokenizer:
29 ```python
30 from transformers import CamembertModel, CamembertTokenizer
31
32 # You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
33 tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb")
34 camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb")
35
camembert.eval()  # disable dropout (or leave in train mode to fine-tune)
```
39
##### Filling masks using the pipeline API
41 ```python
42 from transformers import pipeline
43
camembert_fill_mask = pipeline(
    "fill-mask",
    model="camembert/camembert-base-wikipedia-4gb",
    tokenizer="camembert/camembert-base-wikipedia-4gb",
)
45 results = camembert_fill_mask("Le camembert est un fromage de <mask>!")
# results:
# [{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370},
#  {'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616},
#  {'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364},
#  {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236},
#  {'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}]
52 ```
53
##### Extract contextual embedding features from CamemBERT output
55 ```python
56 import torch
# Tokenize into sub-words with SentencePiece
58 tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
59 # ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
60
# Map sub-word tokens to their vocabulary indices and add the special
# start (<s>) and end (</s>) tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6]
# NB: this can be done in one step: tokenizer.encode("J'aime le camembert !")
65
# Feed the token ids to CamemBERT as a torch tensor (batch of size 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings = camembert(encoded_sentence)[0]  # last-layer hidden states
# (index [0] works across transformers versions; wrap the call in
#  torch.no_grad() if you do not need gradients)
# embeddings.size() -> torch.Size([1, 10, 768])
71 #tensor([[[-0.0928, 0.0506, -0.0094, ..., -0.2388, 0.1177, -0.1302],
72 # [ 0.0662, 0.1030, -0.2355, ..., -0.4224, -0.0574, -0.2802],
73 # [-0.0729, 0.0547, 0.0192, ..., -0.1743, 0.0998, -0.2677],
74 # ...,
75 ```
76
##### Extract contextual embedding features from all CamemBERT layers
78 ```python
79 from transformers import CamembertConfig
# The model must be reloaded with a config that exposes all hidden states
81 config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True)
82 camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config)
83
outputs = camembert(encoded_sentence)
embeddings, all_layer_embeddings = outputs[0], outputs[2]
# all_layer_embeddings is a tuple of 13 tensors:
# the input embedding layer + the outputs of the 12 self-attention layers
86 all_layer_embeddings[5]
# layer 5 contextual embeddings: torch.Size([1, 10, 768])
88 #tensor([[[-0.0059, -0.0227, 0.0065, ..., -0.0770, 0.0369, 0.0095],
89 # [ 0.2838, -0.1531, -0.3642, ..., -0.0027, -0.8502, -0.7914],
90 # [-0.0073, -0.0338, -0.0011, ..., 0.0533, -0.0250, -0.0061],
91 # ...,
92 ```
93
94
95 ## Authors
96
97 CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
98
99
100 ## Citation
101 If you use our work, please cite:
102
103 ```bibtex
104 @inproceedings{martin2020camembert,
105 title={CamemBERT: a Tasty French Language Model},
106 author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
107 booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
108 year={2020}
109 }
110 ```
111