
Model

This model is a fine-tuned version of the Flan-T5-base pre-trained model, trained on the JFLEG dataset with the Happy Transformer framework. Its primary objective is to correct a wide range of grammatical errors in sentences, including issues with punctuation, typos, prepositions, and more.
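
For reference, fine-tuning of this kind can be reproduced with Happy Transformer's text-to-text training API. The sketch below is illustrative only: the CSV path, base checkpoint, and hyperparameters are assumptions, not the exact recipe behind this checkpoint.

from happytransformer import HappyTextToText, TTTrainArgs

happy_tt = HappyTextToText("T5", "google/flan-t5-base")  # assumed base checkpoint
train_args = TTTrainArgs(batch_size=8, num_train_epochs=1)  # illustrative hyperparameters
# The training CSV needs "input" and "target" columns; each input carries the
# "grammar: " prefix and each target holds the corrected sentence.
happy_tt.train("train.csv", args=train_args)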

Usage with Transformers

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Sajid030/t5-base-grammar-synthesis")
model = AutoModelForSeq2SeqLM.from_pretrained("Sajid030/t5-base-grammar-synthesis")

text = "One person if do n't have good health that means so many things they could lost ."
inputs = tokenizer("grammar: " + text, truncation=True, return_tensors='pt')

# max_length keeps the correction from being cut off by the default generation length
output = model.generate(inputs['input_ids'], max_length=64)
correction = tokenizer.batch_decode(output, skip_special_tokens=True)
print("".join(correction))  # Correction: If one person doesn't have good health, so many things could be lost.

Usage with Happy Transformer

from happytransformer import HappyTextToText, TTSettings
happy_tt = HappyTextToText("T5", "Sajid030/t5-base-grammar-synthesis")
args = TTSettings()

sentence = "Much many brands and sellers still in the market."
result = happy_tt.generate_text("grammar: " + sentence, args=args)

print(result.text) # Many brands and sellers are still in the market.
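
TTSettings also exposes standard generation parameters such as beam search, which often improves correction quality. The values below are assumptions for illustration, not tuned defaults for this model.

beam_args = TTSettings(num_beams=5, min_length=1, max_length=128)  # illustrative values
result = happy_tt.generate_text("grammar: " + sentence, args=beam_args)
print(result.text)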