
This is my first model for grammar error correction. It was built by fine-tuning t5-base on the JFLEG dataset. It was trained for only 3 epochs, so the output quality is limited.
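
Training

The checkpoint comes from fine-tuning t5-base on JFLEG for 3 epochs. Below is a minimal sketch of how such a fine-tune could be reproduced with the transformers Seq2SeqTrainer; the dataset split, preprocessing, and hyperparameters shown here are assumptions, not the exact recipe used for this checkpoint.

from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained('t5-base')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-base')

# JFLEG pairs each source 'sentence' with several human 'corrections';
# this sketch simply takes the first correction as the target.
# (Assumed preprocessing -- not necessarily what this checkpoint used.)
dataset = load_dataset('jfleg', split='validation')

def preprocess(example):
    model_inputs = tokenizer('fix grammar: ' + example['sentence'],
                             truncation=True, max_length=128)
    labels = tokenizer(text_target=example['corrections'][0],
                       truncation=True, max_length=128)
    model_inputs['labels'] = labels['input_ids']
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)

args = Seq2SeqTrainingArguments(
    output_dir='grammar-t5',
    num_train_epochs=3,               # the 3 epochs mentioned above
    per_device_train_batch_size=8,    # assumed batch size
    learning_rate=3e-4,               # assumed learning rate
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()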

Usage

You can use this model with the standard transformers library. At 223M parameters, it is small enough to run on a CPU.

$ pip install transformers torch sentencepiece

Once you have the dependencies set up, you can run the model as follows.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = 'vagmi/grammar-t5'

model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prefix the input with the 'fix grammar:' task instruction.
text = 'fix grammar: I am work with machine to write gooder english.'
inputs = tokenizer(text, return_tensors='pt')

# Beam search with 2 beams; early_stopping ends generation once all beams finish.
outputs = model.generate(inputs['input_ids'], num_beams=2, max_length=512, early_stopping=True)
fixed = tokenizer.decode(outputs[0], skip_special_tokens=True)
# I am working with machine to write better english.
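
If you prefer, the same checkpoint can be wrapped in the text2text-generation pipeline. This is a sketch rather than a documented entry point for this model; device=-1 and the generation settings below are just one reasonable CPU configuration.

from transformers import pipeline

# Convenience wrapper around the same checkpoint; device=-1 keeps it on the CPU.
corrector = pipeline('text2text-generation', model='vagmi/grammar-t5', device=-1)

result = corrector('fix grammar: She no went to the market yesterday.',
                   num_beams=2, max_length=512)
print(result[0]['generated_text'])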
Model size: 223M parameters (F32, Safetensors)

Dataset used to train vagmi/grammar-t5: jfleg