---
language:
- it
---
## GPT-ita-fdi_lega🇮🇹


A finetune of an Italian GPT-2 model ([GePpeTto](https://huggingface.co/LorenzoDeMattei/GePpeTto)), trained on tweets from politicians of the far-right Italian parties FDI and Lega.


## Finetuning corpus

The model was finetuned on a private dataset of tweets from Italian politicians. The tweets were collected between 2021 and 2022 from the Twitter accounts of all the FDI and Lega members of the Italian Parliament.
In the end, the finetuning was conducted over a corpus of ~40K tweets.

## Uses

Given a few Italian words as a prompt, the model generates a tweet in the style of far-right Italian politicians. Try it out [here](https://huggingface.co/spaces/ruggsea/demo_gpt-ita-fdi_lega).


## Bias, Risks, and Limitations

Compared to the base Italian GPT-2 model, this model may generate more hateful or toxic content and exhibit stronger political bias, in line with its training corpus.

### Recommendations


Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

Use the code below to get started with the model.


```
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load the checkpoint with a language-modeling head so the model can generate text
model = GPT2LMHeadModel.from_pretrained('ruggsea/gpt-ita-fdi_lega')
tokenizer = GPT2Tokenizer.from_pretrained('ruggsea/gpt-ita-fdi_lega')
```
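
Once the model and tokenizer are loaded as above, generation works through the standard GPT-2 API. The snippet below is a minimal sketch: the prompt and the sampling parameters (`max_length`, `top_k`, `top_p`) are illustrative choices, not settings recommended by the model author.

```
# Minimal generation sketch; prompt and sampling settings are illustrative only
prompt = "L'Italia deve"  # a few Italian words to start from
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```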