---
language:
- en
pipeline_tag: text2text-generation
metrics:
- f1
tags:
- grammatical error correction
- GEC
- english
---

This is a fine-tuned version of multilingual BART (mBART-50, 610M parameters), trained for Grammatical Error Correction in English on the public FCE dataset.

To initialize the model:

    from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

    model = MBartForConditionalGeneration.from_pretrained("MRNH/mbart-english-grammar-corrector")
Use the tokenizer to encode a source sentence and, for training, its corrected target:

    tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-english-grammar-corrector", src_lang="en_XX", tgt_lang="en_XX")

    inputs = tokenizer("I was here yesterday to studying",
                       text_target="I was here yesterday to study", return_tensors='pt')
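The call returns `input_ids` and `attention_mask` for the source sentence and, because `text_target` was given, `labels` for the correction:

    print(list(inputs.keys()))  # ['input_ids', 'attention_mask', 'labels']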

To generate text using the model:

    output = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"],
                            forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
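The generated ids can be decoded back into the corrected sentence with the same tokenizer:

    corrected = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
    print(corrected)  # expected output along the lines of "I was here yesterday to study"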


Training of the model uses the standard sequence-to-sequence cross-entropy loss: when `labels` are passed to the forward call, the model output h contains both the logits and the loss:

    h = model(input_ids=inputs["input_ids"],
              attention_mask=inputs["attention_mask"],
              labels=inputs["labels"])
    logits, loss = h.logits, h.loss
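
A minimal training step built on this loss could look as follows; the optimizer and learning rate here are illustrative assumptions, not settings taken from this card:

    import torch

    # Hypothetical hyperparameters, chosen for illustration only.
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    model.train()
    optimizer.zero_grad()
    h = model(input_ids=inputs["input_ids"],
              attention_mask=inputs["attention_mask"],
              labels=inputs["labels"])
    h.loss.backward()   # backpropagate the cross-entropy loss
    optimizer.step()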