---
language: fr
license: apache-2.0
datasets:
- wikipedia
---

# mALBERT Base Cased 64k

Pretrained multilingual language model using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert).
This model, unlike other ALBERT models, is cased: it does make a difference between french and French.

## Model description

mALBERT is a transformers model pretrained on 16GB of French Wikipedia in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
  the entire masked sentence through the model and has to predict the masked words. This is different from traditional
  recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
  GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
  sentence (see the fill-mask sketch after this list).
- Sentence Ordering Prediction (SOP): mALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
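
A quick way to see the MLM objective in action is the `fill-mask` pipeline from the `transformers` library. This is a minimal illustrative sketch, not part of the original card; the example sentence is arbitrary:

```python
from transformers import pipeline

# Predict the token hidden behind [MASK] with this checkpoint.
unmasker = pipeline("fill-mask", model="cservan/malbert-base-cased-64k")
predictions = unmasker("Paris est la [MASK] de la France.")
for p in predictions[:3]:
    # Each prediction carries the proposed token and its score.
    print(p["token_str"], round(p["score"], 3))
```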

This way, the model learns an inner representation of the languages that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the mALBERT model as inputs.
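
As an illustration of that feature-extraction use, the sketch below (an assumption, not from the original card) feeds the pooled mALBERT output of each sentence to a scikit-learn classifier:

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained("cservan/malbert-base-cased-64k")
model = AlbertModel.from_pretrained("cservan/malbert-base-cased-64k")

# Toy labelled data; replace with your own dataset.
sentences = ["Très bon film.", "Un film décevant."]
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    # pooler_output gives one vector per sentence, used here as features.
    features = model(**enc).pooler_output.numpy()

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```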

mALBERT is particular in that it shares its layers across its Transformer encoder, so all layers have the same weights. Using repeated layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, since the model still has to iterate through the same number of (repeated) layers.
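
This sharing is visible in the `transformers` implementation, where the twelve logical layers are backed by a single parameter group. The check below is a small illustrative sketch and assumes the internal attribute layout of `AlbertModel` in recent `transformers` versions:

```python
from transformers import AlbertModel

model = AlbertModel.from_pretrained("cservan/malbert-base-cased-64k")

# The config advertises 12 hidden layers...
print(model.config.num_hidden_layers)
# ...but they are all served by a single shared layer group.
print(model.config.num_hidden_groups)
print(len(model.encoder.albert_layer_groups))
```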

This is the second version of the base model.

This model has the following configuration (see the sketch after the list):

- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 64k vocabulary size
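
These values can be read back from the published checkpoint configuration. The snippet below is illustrative only:

```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("cservan/malbert-base-cased-64k")

# Fields corresponding to the list above.
print(config.num_hidden_layers)    # repeating layers
print(config.embedding_size)       # embedding dimension
print(config.hidden_size)          # hidden dimension
print(config.num_attention_heads)  # attention heads
print(config.vocab_size)           # vocabulary size
```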

## Intended uses & limitations

You can use the raw model for either masked language modeling or sentence ordering prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=malbert-base-cased-64k) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering; a minimal fine-tuning
sketch follows. For tasks such as text generation you should look at a model like GPT2.
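
As an illustration, a sequence-classification fine-tuning setup could start as follows. The number of labels, dataset and training arguments are placeholders, not part of the original card:

```python
from transformers import (AlbertForSequenceClassification, AlbertTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AlbertTokenizer.from_pretrained("cservan/malbert-base-cased-64k")
# A classification head is added on top of the pretrained encoder.
model = AlbertForSequenceClassification.from_pretrained(
    "cservan/malbert-base-cased-64k", num_labels=2
)

# `train_dataset` is a placeholder for your own tokenized dataset.
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments(output_dir="malbert-finetuned", num_train_epochs=3),
#     train_dataset=train_dataset,
# )
# trainer.train()
```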

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('cservan/malbert-base-cased-64k')
model = AlbertModel.from_pretrained("cservan/malbert-base-cased-64k")
text = "Remplacez-moi par le texte en français que vous souhaitez."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('cservan/malbert-base-cased-64k')
model = TFAlbertModel.from_pretrained("cservan/malbert-base-cased-64k")
text = "Remplacez-moi par le texte en français que vous souhaitez."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```


## Training data

The mALBERT model was pretrained on 4GB of [French Wikipedia](https://fr.wikipedia.org/wiki/French_Wikipedia) (excluding lists, tables and
headers).

## Training procedure

### Preprocessing

The texts are tokenized using SentencePiece with a vocabulary size of 64,000 (the model is cased, so the texts are not lowercased). The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
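
For instance, encoding a sentence pair with the tokenizer reproduces this layout; the sketch below is illustrative only:

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("cservan/malbert-base-cased-64k")
enc = tokenizer("Première phrase.", "Deuxième phrase.")
# The special tokens frame the two segments: [CLS] ... [SEP] ... [SEP]
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
```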

### Training

The mALBERT procedure follows the BERT setup.

The details of the masking procedure for each sentence are the following (reproduced in the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
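
This 80/10/10 scheme matches what `DataCollatorForLanguageModeling` in the `transformers` library applies, so dynamic masking of the same kind can be sketched as follows (illustrative, not the original training script):

```python
from transformers import AlbertTokenizer, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizer.from_pretrained("cservan/malbert-base-cased-64k")
# mlm_probability=0.15 masks 15% of tokens, then applies the 80/10/10 rule.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

batch = collator([tokenizer("Une phrase d'exemple pour le masquage.")])
print(batch["input_ids"])  # some ids replaced by the [MASK] id
print(batch["labels"])     # -100 everywhere except the masked positions
```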

## Evaluation results

When fine-tuned on downstream tasks, the ALBERT models achieve the following results:

Slot-filling:

|Models ⧹ Tasks |  MMNLU |  MultiATIS++ |  CoNLL2003 |  MultiCoNER |  SNIPS |  MEDIA |
|---------------|--------------|--------------|--------------|--------------|--------------|--------------|
|EnALBERT |  N/A |  N/A |  89.67 (0.34) |  42.36 (0.22) |  95.95 (0.13) |  N/A |
|FrALBERT |  N/A |  N/A |  N/A |  N/A |  N/A |  81.76 (0.59) |
|mALBERT-128k |  65.81 (0.11) |  89.14 (0.15) |  88.27 (0.24) |  46.01 (0.18) |  91.60 (0.31) |  83.15 (0.38) |
|mALBERT-64k  |  65.29 (0.14) |  88.88 (0.14) |  86.44 (0.37) |  44.70 (0.27) |  90.84 (0.47) |  82.30 (0.19) |
|mALBERT-32k  |  64.83 (0.22) |  88.60 (0.27) |  84.96 (0.41) |  44.13 (0.39) |  89.89 (0.68) |  82.04 (0.28) |

Classification task:

|Models ⧹ Tasks | MMNLU | MultiATIS++ | SNIPS | SST2 |
|---------------|--------------|--------------|--------------|--------------|
|mALBERT-128k | 72.35 (0.09) | 90.58 (0.98) | 96.84 (0.49) | 34.66 (1.46) |
|mALBERT-64k  | 71.26 (0.11) | 90.97 (0.70) | 96.53 (0.44) | 34.64 (1.02) |
|mALBERT-32k  | 70.76 (0.11) | 90.55 (0.98) | 96.49 (0.45) | 34.18 (1.64) |

### BibTeX entry and citation info

```bibtex
@inproceedings{servan2024mALBERT,
  author    = {Christophe Servan and
               Sahar Ghannay and
               Sophie Rosset},
  booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
  title     = {{mALBERT: Is a Compact Multilingual BERT Model Still Worth It?}},
  year      = {2024},
  address   = {Torino, Italy},
  month     = may,
}
```

Link to the paper: [PDF](https://hal.science/hal-04520797)