MRNH committed
Commit eaa75c2
1 Parent(s): d97439f

Update README.md

Files changed (1)
  1. README.md +8 -4
README.md CHANGED
@@ -23,12 +23,16 @@ To generate text using the model:
 
 
 tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-english-grammar-corrector", src_lang="en_XX", tgt_lang="en_XX")
-input = tokenizer("I was here yesterday to studying",text_target="I was here yesterday to study", return_tensors='pt')
-output = model.generate(input["input_ids"],attention_mask=input["attention_mask"],forced_bos_token_id=tokenizer_it.lang_code_to_id["en_XX"])
+
+input = tokenizer("I was here yesterday to studying",
+                  text_target="I was here yesterday to study", return_tensors='pt')
+
+output = model.generate(input["input_ids"],attention_mask=input["attention_mask"],
+                        forced_bos_token_id=tokenizer_it.lang_code_to_id["en_XX"])
 
 
-Training of the model is performed using the following loss computatation:
+Training of the model is performed using the following loss computation based on the hidden state output h:
 
-hidden_state.logits, hidden_state.loss = model(input_ids=input["input_ids"],
+h.logits, h.loss = model(input_ids=input["input_ids"],
                          attention_mask=input["attention_mask"],
                          labels=input["labels"])
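
For reference, a minimal end-to-end sketch of the generation call shown in the new README text. It assumes the model weights are loaded from the same checkpoint as the tokenizer, and it treats the snippet's tokenizer_it as the tokenizer already loaded above; the decode step at the end is illustrative.

    # Sketch: load model + tokenizer, then generate a corrected sentence.
    from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

    # Assumption: model weights live at the same checkpoint as the tokenizer.
    model = MBartForConditionalGeneration.from_pretrained("MRNH/mbart-english-grammar-corrector")
    tokenizer = MBart50TokenizerFast.from_pretrained(
        "MRNH/mbart-english-grammar-corrector", src_lang="en_XX", tgt_lang="en_XX"
    )

    # Encode the ungrammatical sentence; text_target is only needed when computing the loss.
    input = tokenizer("I was here yesterday to studying",
                      text_target="I was here yesterday to study", return_tensors='pt')

    # Generate the correction, forcing English as the first generated token.
    output = model.generate(input["input_ids"],
                            attention_mask=input["attention_mask"],
                            forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])

    print(tokenizer.batch_decode(output, skip_special_tokens=True))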
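
The loss computation in the new text can be read as a standard forward pass with labels: the model returns an output object whose .loss and .logits fields play the role of h.loss and h.logits. A sketch, reusing the model, tokenizer and input from the snippet above:

    # Sketch of the training-side loss computation (h is the forward-pass output).
    h = model(input_ids=input["input_ids"],
              attention_mask=input["attention_mask"],
              labels=input["labels"])

    loss, logits = h.loss, h.logits  # cross-entropy loss over target tokens, per-token logits
    loss.backward()                  # backpropagate during fine-tuning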