---
license: apache-2.0
datasets:
- wikipedia
language:
- it
widget:
- text: "milano è una [MASK] dell'italia"
  example_title: "Example 1"
- text: "il sole è una [MASK] della via lattea"
  example_title: "Example 2"
- text: "l'italia è una [MASK] dell'unione europea"
  example_title: "Example 3"
---
--------------------------------------------------------------------------------------------------

<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"></span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: BLAZE 🔥</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"></span>
</body>

--------------------------------------------------------------------------------------------------

<h3>Introduction</h3>

This model is a <b>lightweight</b> and uncased version of <b>BERT</b> <b>[1]</b> for the <b>Italian</b> language. Its <b>55M parameters</b> and <b>220MB</b> size make it
<b>50% lighter</b> than a typical monolingual BERT model. It is ideal when memory consumption and execution speed are critical, while still maintaining high-quality results.


<h3>Model description</h3>

The model builds on the multilingual <b>DistilBERT</b> <b>[2]</b> model (from the HuggingFace team: [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased)) as a starting point, 
focusing it on the Italian language and turning it into an uncased model by modifying the embedding layer 
(as in <b>[3]</b>, but computing document-level frequencies over the <b>Wikipedia</b> dataset and setting a frequency threshold of 0.1%), which brings a considerable
reduction in the number of parameters.
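
As a rough sketch of this vocabulary-reduction step (the actual script is not published in this card), one could keep only the tokens whose document frequency over the Italian Wikipedia reaches the 0.1% threshold and copy the surviving embedding rows into a smaller matrix; the `corpus` iterable and all variable names below are illustrative assumptions:

```python
from collections import Counter

import torch
from transformers import AutoTokenizer, DistilBertForMaskedLM

# Start from the cased multilingual DistilBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = DistilBertForMaskedLM.from_pretrained("distilbert-base-multilingual-cased")

# Count, for every sub-word token, how many Italian Wikipedia articles it appears in
doc_freq = Counter()
num_docs = 0
for doc in corpus:  # `corpus`: an iterable of Italian Wikipedia articles (assumed)
    num_docs += 1
    doc_freq.update(set(tokenizer.tokenize(doc.lower())))

# Keep tokens appearing in at least 0.1% of the documents (plus the special tokens)
threshold = 0.001 * num_docs
kept = [tok for tok, _ in sorted(tokenizer.vocab.items(), key=lambda kv: kv[1])
        if doc_freq[tok] >= threshold or tok in tokenizer.all_special_tokens]

# Copy the surviving rows of the embedding matrix into a smaller one
old_emb = model.distilbert.embeddings.word_embeddings.weight.data
new_emb = torch.stack([old_emb[tokenizer.vocab[tok]] for tok in kept])
```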

To compensate for the removal of cased tokens, which forces the model to rely on lowercase representations of words that were previously capitalized, 
the model has been further pre-trained on the Italian split of the [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset, using the <b>whole word masking [4]</b> technique to make it more robust 
to the new uncased representations.
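
A minimal sketch of how whole-word-masking batches can be prepared with the Hugging Face data collator (the masking probability of 0.15 is an assumption, not stated in this card):

```python
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("osiria/blaze-it")

# The collator masks every sub-word of a randomly selected word at once
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)

encoding = tokenizer("il sole è una stella della via lattea")
batch = collator([{"input_ids": encoding["input_ids"]}])  # contains input_ids and MLM labels
```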

The resulting model has 55M parameters, a vocabulary of 13,832 tokens, and a size of 220MB, which makes it <b>50% lighter</b> than a typical monolingual BERT model and
20% lighter than a standard monolingual DistilBERT model.


<h3>Training procedure</h3>

The model has been trained for <b>masked language modeling</b> on the Italian <b>Wikipedia</b> (~3GB) dataset for 10K steps, using the AdamW optimizer, with a batch size of 512 
(obtained through 128 gradient accumulation steps),
a sequence length of 512, and a linearly decaying learning rate starting from 5e-5. The training has been performed using <b>dynamic masking</b> between epochs and
exploiting the <b>whole word masking</b> technique.
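
An indicative `TrainingArguments` configuration matching the hyperparameters above might look as follows; the per-device batch size and the saving interval are assumptions, since only the effective batch size is stated:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="blaze-it-mlm",
    max_steps=10_000,                 # 10K training steps
    per_device_train_batch_size=4,    # assumed: 4 x 128 accumulation steps = 512 effective
    gradient_accumulation_steps=128,
    learning_rate=5e-5,
    lr_scheduler_type="linear",       # linearly decaying learning rate
    optim="adamw_torch",
    save_steps=1_000,
)
```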


<h3>Performances</h3>

The following metrics have been computed on the Part of Speech Tagging and Named Entity Recognition tasks, using the <b>UD Italian ISDT</b> and <b>WikiNER</b> datasets, respectively. 
The PoS tagging model has been trained for 5 epochs, and the NER model for 3 epochs, both with a constant learning rate fixed at 1e-5. For Part of Speech Tagging, the metrics have been computed on the default test set
provided with the dataset, while for Named Entity Recognition the metrics have been computed with a 5-fold cross-validation.

| Task | Recall | Precision | F1 |
| ------ | ------ | ------ |  ------ |
| Part of Speech Tagging | 97.48  | 97.29 | 97.37 |
| Named Entity Recognition | 89.29  | 89.84 | 89.53 |

The metrics have been computed at the token level and macro-averaged over the classes.
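
As a rough illustration of this evaluation setup, the macro-averaged token-level scores can be computed with scikit-learn; `true_labels` and `pred_labels` are hypothetical flat lists of per-token tags produced by the fine-tuned models:

```python
from sklearn.metrics import precision_recall_fscore_support

# `true_labels` and `pred_labels`: flat lists of per-token tags (assumed)
precision, recall, f1, _ = precision_recall_fscore_support(
    true_labels, pred_labels, average="macro"
)
```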


<h3>Demo</h3>

You can try the model online (fine-tuned on named entity recognition) using this web app: https://huggingface.co/spaces/osiria/next-it-demo

<h3>Quick usage</h3>

```python
from transformers import AutoTokenizer, DistilBertForMaskedLM
from transformers import pipeline

# Load the uncased Italian tokenizer and the masked-language-modeling head
tokenizer = AutoTokenizer.from_pretrained("osiria/blaze-it")
model = DistilBertForMaskedLM.from_pretrained("osiria/blaze-it")

# Build a fill-mask pipeline to predict the [MASK] token in a sentence
pipeline_mlm = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
```
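
For example, one of the widget prompts from this card can be filled directly:

```python
# Returns the top candidate completions for the [MASK] token
pipeline_mlm("milano è una [MASK] dell'italia")
```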


<h3>Limitations</h3>

This lightweight model is mainly trained on Wikipedia, so it is particularly suitable as an agile analyzer for large volumes of natively digital text 
from the web, written in a correct and fluent form (like wikis, web pages, news, etc.). However, it may show limitations when it comes to chaotic text containing errors and slang expressions
(like social media posts) or to domain-specific text (like medical, financial or legal content).

<h3>References</h3>

[1] https://arxiv.org/abs/1810.04805

[2] https://arxiv.org/abs/1910.01108

[3] https://arxiv.org/abs/2010.05609

[4] https://arxiv.org/abs/1906.08101

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license.