---
license: afl-3.0
language:
- it
widget:
- text: Gli sviluppi delle prestazioni rivalutate e del valore di [MASK] sono di seguito riportati
---

# BureauBERTo: adapting UmBERTo to the Italian bureaucratic language
<img src="https://huggingface.co/colinglab/BureauBERTo/resolve/main/bureauberto.jpg?raw=true" width="600"/>



BureauBERTo is the first transformer-based language model adapted to the Italian Public Administration (PA) and technical-bureaucratic domains. The model was obtained by further pre-training the general-purpose Italian model UmBERTo.

## Training Corpus
BureauBERTo is trained on the Bureau Corpus, a composite corpus containing PA, banking, and insurance documents. The Bureau Corpus contains 35,293,226 sentences and approximately 1B tokens, for a total of 6.7 GB of plain text. The input dataset was constructed by applying the BureauBERTo tokenizer to contiguous sentences drawn from one or more documents, inserting the special separator token after each sentence. The BureauBERTo vocabulary was expanded with 8,305 domain-specific tokens extracted from the Bureau Corpus.

## Training Procedure

The further pre-training was performed on the Bureau Corpus with a masked language modeling (MLM) objective, randomly masking 15% of the tokens. The model was trained for 40 epochs, resulting in 17,400 steps with a batch size of 8K on an NVIDIA A100 GPU. We used a learning rate of 5e-5 with the Adam optimizer (β1 = 0.9, β2 = 0.98), a weight decay of 0.1, and a warm-up ratio of 0.06.
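To illustrate the MLM objective, the random masking step can be sketched in plain Python. This is a simplified, hypothetical sketch, not the actual training code: in practice the masking is handled by the training framework, and BERT-style masking additionally replaces some selected tokens with random or unchanged tokens (omitted here); the function and parameter names are illustrative.

```python
import random

def mask_tokens(token_ids, mask_id, prob=0.15, seed=0):
    """Randomly replace ~`prob` of the tokens with `mask_id` (simplified MLM).

    Returns the masked sequence and the labels: the original id at masked
    positions, -100 (the usual "ignore" label) everywhere else.
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for tid in token_ids:
        if rng.random() < prob:
            masked.append(mask_id)   # hide this token from the model
            labels.append(tid)       # the model must predict the original id
        else:
            masked.append(tid)
            labels.append(-100)      # position not scored by the MLM loss
    return masked, labels

ids = [101, 2023, 2003, 1037, 7099, 102]
masked, labels = mask_tokens(ids, mask_id=0)
print(masked, labels)
```

The loss is then computed only on the masked positions, which is what allows the model to adapt its representations to the bureaucratic domain without any labeled data.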


## Usage

BureauBERTo can be loaded with the Transformers `Auto` classes:

```python
from transformers import AutoModel, AutoTokenizer
model_name = "colinglab/BureauBERTo"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
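For masked-word prediction, as in the widget example above, the model can also be used through the `fill-mask` pipeline. A minimal sketch (assuming a standard RoBERTa-style tokenizer; `tokenizer.mask_token` is used so the correct mask token is inserted regardless):

```python
from transformers import pipeline

# Fill-mask inference; the example sentence is taken from the model card widget.
fill = pipeline("fill-mask", model="colinglab/BureauBERTo")

# Build the masked sentence using whatever mask token the tokenizer defines.
sentence = ("Gli sviluppi delle prestazioni rivalutate e del valore di "
            f"{fill.tokenizer.mask_token} sono di seguito riportati")

# Print the top predicted fillers with their scores.
for pred in fill(sentence):
    print(pred["token_str"], round(pred["score"], 3))
```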

## Citation
If you find this resource or the paper useful, please cite:

```
@inproceedings{auriemma2023bureauberto,
  address = {Pisa, Italy},
  series = {{CEUR} {Workshop} {Proceedings}},
  title = {{BureauBERTo}: adapting {UmBERTo} to the {Italian} bureaucratic language},
  shorttitle = {{BureauBERTo}},
  language = {en},
  booktitle = {Atti del convegno {Ital-IA} 2023, Workshop {AI} per la Pubblica Amministrazione},
  publisher = {CEUR},
  author = {Auriemma, Serena and Madeddu, Mauro and Miliani, Martina and Bondielli, Alessandro and Passaro, Lucia C. and Lenci, Alessandro},
  month = sep,
  year = {2023}
}
```