cservan committed on
Commit
6c482f5
1 Parent(s): c91f933

Add readme file

Files changed (1)
  1. README.md +136 -0
README.md ADDED
---
language: fr
license: apache-2.0
datasets:
- wikipedia
---

# frALBERT Base

Pretrained model on the French language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, like all ALBERT models, is uncased: it does not make a difference
between french and French.

## Model description

frALBERT is a transformers model pretrained on 4 GB of French Wikipedia in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives (see the sketch after the list):

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): frALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.

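Both pretraining heads are exposed by `AlbertForPreTraining` in the `transformers` library. The sketch below is illustrative only: the checkpoint name follows the usage examples further down, and whether the released checkpoint ships the MLM/SOP head weights is an assumption.

```python
from transformers import AlbertTokenizer, AlbertForPreTraining

tokenizer = AlbertTokenizer.from_pretrained('fralbert-base')
model = AlbertForPreTraining.from_pretrained('fralbert-base')

# Two consecutive segments, encoded as [CLS] A [SEP] B [SEP].
inputs = tokenizer("Premier segment.", "Second segment.", return_tensors='pt')
outputs = model(**inputs)

print(outputs.prediction_logits.shape)  # MLM head: one score per position and vocabulary token
print(outputs.sop_logits.shape)         # SOP head: two scores (segments in order vs. swapped)
```
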
This way, the model learns an inner representation of the French language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the frALBERT model as inputs.

frALBERT is particular in that it shares its layers across its Transformer encoder. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.

This is the first version of the base model.

This model has the following configuration:

- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters

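In the `transformers` library, this roughly corresponds to the following `AlbertConfig`. This is a minimal sketch, not the authors' released configuration: the mapping of the numbers above onto config fields is an assumption, the 32,000-token vocabulary is taken from the preprocessing section below, and `intermediate_size=3072` is assumed to match the standard ALBERT-base setup.

```python
from transformers import AlbertConfig, AlbertModel

# Approximate frALBERT-base configuration (assumed mapping of the figures listed above).
config = AlbertConfig(
    vocab_size=32000,        # SentencePiece vocabulary (see "Preprocessing")
    embedding_size=128,      # embedding dimension
    hidden_size=768,         # hidden dimension
    num_hidden_layers=12,    # repeating layers (weights shared across them)
    num_attention_heads=12,  # attention heads
    intermediate_size=3072,  # assumed, as in ALBERT-base
)

model = AlbertModel(config)  # randomly initialised; use from_pretrained() for the released weights
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```
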
## Intended uses & limitations

You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fralbert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.

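As an illustration of the fine-tuning route, here is a minimal sketch of loading the checkpoint with a question-answering head. The checkpoint name follows the usage examples below; the French question and context are made-up examples, and the actual fine-tuning loop (dataset, optimizer, schedule) is omitted.

```python
from transformers import AlbertTokenizer, AlbertForQuestionAnswering

# Loads the pretrained encoder and adds a randomly initialised span-extraction head.
tokenizer = AlbertTokenizer.from_pretrained('fralbert-base')
model = AlbertForQuestionAnswering.from_pretrained('fralbert-base')

question = "Où se trouve la tour Eiffel ?"
context = "La tour Eiffel se trouve à Paris, en France."

# Question and context are packed together as [CLS] question [SEP] context [SEP].
inputs = tokenizer(question, context, return_tensors='pt')
outputs = model(**inputs)  # start/end logits, to be trained with a span-extraction loss
```
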
### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='fralbert-base')
>>> unmasker("Bonjour Je suis un model [MASK] .")
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('fralbert-base')
model = AlbertModel.from_pretrained("fralbert-base")
text = "Remplacez-moi par le texte en français que vous souhaitez."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('fralbert-base')
model = TFAlbertModel.from_pretrained("fralbert-base")
text = "Remplacez-moi par le texte en français que vous souhaitez."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

The frALBERT model was pretrained on 4 GB of [French Wikipedia](https://fr.wikipedia.org/wiki/French_Wikipedia) (excluding lists, tables and
headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

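The tokenizer builds this pair layout automatically when given two segments. A minimal sketch, reusing the `fralbert-base` checkpoint name from the examples above:

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('fralbert-base')

# Passing two texts produces the [CLS] ... [SEP] ... [SEP] layout described above,
# with token_type_ids distinguishing sentence A from sentence B.
encoded = tokenizer("Première phrase.", "Seconde phrase.")
print(tokenizer.decode(encoded["input_ids"]))  # roughly: [CLS] premiere phrase. [SEP] seconde phrase. [SEP]
```
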
### Training

The frALBERT procedure follows the BERT setup.

The details of the masking procedure for each sentence are the following (see the sketch after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

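This 80% / 10% / 10% split is the standard BERT-style masking scheme, which is what `DataCollatorForLanguageModeling` in the `transformers` library implements. The sketch below only illustrates the scheme; it is not the authors' pretraining script, and the checkpoint name follows the examples above.

```python
from transformers import AlbertTokenizer, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizer.from_pretrained('fralbert-base')

# mlm_probability=0.15 selects 15% of the tokens; the collator then applies the
# 80% [MASK] / 10% random token / 10% unchanged split described above.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoded = tokenizer("Remplacez-moi par le texte en français que vous souhaitez.")
batch = collator([encoded])
print(batch["input_ids"])  # some positions replaced by the [MASK] token id
print(batch["labels"])     # original ids at masked positions, -100 everywhere else
```
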
## Evaluation results

When fine-tuned on downstream question-answering tasks, frALBERT-base achieves the following results:

|                | FQuAD1.0    | PIAF_dev    |
|----------------|-------------|-------------|
| frALBERT-base  | 72.6 / 55.1 | 61.0 / 38.9 |

### BibTeX entry and citation info

```bibtex
@inproceedings{cattan2021fralbert,
  author    = {Oralie Cattan and
               Christophe Servan and
               Sophie Rosset},
  booktitle = {Recent Advances in Natural Language Processing, RANLP 2021},
  title     = {{On the Usability of Transformers-based models for a French Question-Answering task}},
  year      = {2021},
  address   = {Online},
  month     = sep,
}
```