---
language:
- it
pipeline_tag: text2text-generation
metrics:
- f1
tags:
- grammatical error correction
- GEC
- italian
---

This is a fine-tuned version of multilingual BART (mBART-50), trained for Grammatical Error Correction in Italian on the public MERLIN corpus.

To initialize the model:

    
    from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
    model = MBartForConditionalGeneration.from_pretrained("MRNH/mbart-italian-grammar-corrector")
    
    
To generate a corrected sentence with the model:

    
    tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-italian-grammar-corrector", src_lang="it_IT", tgt_lang="it_IT")
    # Illustrative ungrammatical Italian sentence (the model expects Italian input)
    inputs = tokenizer("Ero qui ieri per studiando", return_tensors="pt")
    output = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"], forced_bos_token_id=tokenizer.lang_code_to_id["it_IT"])
    print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
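
Here `forced_bos_token_id` forces the Italian language code to be the first generated token, which mBART-50 requires in order to select the output language.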



Fine-tuning minimizes the standard sequence-to-sequence cross-entropy loss: when `labels` are passed to the forward call, the returned output `h` carries both the loss and the logits:

    # Tokenize a source/target pair; text_target supplies the labels
    batch = tokenizer("Ero qui ieri per studiando", text_target="Ero qui ieri per studiare", return_tensors="pt")
    h = model(input_ids=batch["input_ids"],
              attention_mask=batch["attention_mask"],
              labels=batch["labels"])
    loss, logits = h.loss, h.logits
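
For completeness, one optimization step could then look like the sketch below; the optimizer and learning rate (AdamW, 5e-5) are illustrative assumptions, not settings reported for this model.

    from torch.optim import AdamW

    # Sketch of a single fine-tuning step (optimizer choice is an assumption)
    optimizer = AdamW(model.parameters(), lr=5e-5)
    model.train()
    h = model(input_ids=batch["input_ids"],
              attention_mask=batch["attention_mask"],
              labels=batch["labels"])
    h.loss.backward()      # backpropagate the cross-entropy loss
    optimizer.step()       # update the mBART parameters
    optimizer.zero_grad()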