---
license: apache-2.0
language:
- it
widget:
- text: "una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra"
  example_title: "Example 1"
- text: "il governo [MASK] dovrebbe fare politica, non soltanto propaganda! #vergogna"
  example_title: "Example 2"
- text: "che serata da sogno sul #redcarpet! grazie a tutti gli attori e registi del [MASK] italiano #oscar #awards"
  example_title: "Example 3"
---

--------------------------------------------------------------------------------------------------

<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"></span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: BERT-TWEET</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"></span>
</body>

--------------------------------------------------------------------------------------------------

<h3>Model description</h3>

This is a <b>BERT</b> <b>[1]</b> uncased model for the <b>Italian</b> language, obtained using <b>TwHIN-BERT</b> <b>[2]</b> ([twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base)) as a starting point and adapting it to Italian by modifying the embedding layer
(as in <b>[3]</b>, selecting the Italian vocabulary via document-level token frequencies computed over the <b>Wikipedia</b> dataset).

The resulting model has 110M parameters, a vocabulary of 30,520 tokens, and a size of ~440 MB.
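
Since the model is uncased and ships a reduced Italian vocabulary, a minimal sketch to inspect it (it just loads the tokenizer and prints its size and a sample tokenization; the example sentence is illustrative):

```python
from transformers import BertTokenizerFast

# load the tokenizer of the adapted model
tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-tweet-base-italian-uncased")

print(len(tokenizer))  # vocabulary size, expected ~30,520 tokens
# the uncased tokenizer lowercases its input before splitting into wordpieces
print(tokenizer.tokenize("Una fantastica partita di #calcio!"))
```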

<h3>Quick usage</h3>

```python
from transformers import BertTokenizerFast, BertModel

# load the tokenizer and the base encoder (no task-specific head)
tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-tweet-base-italian-uncased")
model = BertModel.from_pretrained("osiria/bert-tweet-base-italian-uncased")
```
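
For masked-token prediction, as in the widget examples above, a minimal sketch using the standard `fill-mask` pipeline (the example sentence is taken from the widget configuration):

```python
from transformers import pipeline

# fill-mask pipeline wrapping the same checkpoint with a masked-LM head
fill_mask = pipeline("fill-mask", model="osiria/bert-tweet-base-italian-uncased")

preds = fill_mask("una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra")
for p in preds:
    print(p["token_str"], round(p["score"], 3))  # top candidate tokens with their scores
```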

You can also find the model already fine-tuned for sentiment analysis here: https://huggingface.co/osiria/bert-tweet-italian-uncased-sentiment
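
A hedged sketch of how that fine-tuned checkpoint could be used, assuming it exposes a standard text-classification head (the label names come from that model's own config and are not documented here):

```python
from transformers import pipeline

# text-classification pipeline over the fine-tuned sentiment checkpoint
classifier = pipeline("text-classification", model="osiria/bert-tweet-italian-uncased-sentiment")

print(classifier("che serata da sogno sul #redcarpet!"))  # e.g. [{'label': ..., 'score': ...}]
```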

<h3>References</h3>

[1] https://arxiv.org/abs/1810.04805

[2] https://arxiv.org/abs/2209.07562

[3] https://arxiv.org/abs/2010.05609

<h3>Limitations</h3>

This model was trained on tweets, so it is mainly suited to general-purpose social media text processing: short texts written in a social-network style.
It may show limitations on longer and more structured text, or on domain-specific text.

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license.