# Model Description

The t5-base-c4jfleg model is T5-base fine-tuned for grammar correction on the [**JFLEG dataset**](https://huggingface.co/datasets/jfleg) and the [**C4 200M dataset**](https://huggingface.co/datasets/liweili/c4_200m), using around 3,000 examples from each.

The original Google [**T5-base**](https://huggingface.co/t5-base) model was pre-trained on the [**C4 dataset**](https://huggingface.co/datasets/c4). The T5 model was presented in [**Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer**](https://arxiv.org/pdf/1910.10683.pdf) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.

## Usage

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("team-writing-assistant/t5-base-c4jfleg")
model = AutoModelForSeq2SeqLM.from_pretrained("team-writing-assistant/t5-base-c4jfleg")
```

## Examples

Input: My grammar are bad.
Output: My grammar is bad.

Input: Speed of light is fastest than speed of sound
Output: Speed of light is faster than speed of sound.
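For completeness, below is a minimal end-to-end inference sketch that produces corrections like those above. The `Grammar: ` task prefix is an assumption (fine-tuned T5 models typically expect a task prefix); verify it against the training setup before relying on it.

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("team-writing-assistant/t5-base-c4jfleg")
model = AutoModelForSeq2SeqLM.from_pretrained("team-writing-assistant/t5-base-c4jfleg")

# The "Grammar: " prefix is an assumed task prefix, not confirmed by this card.
text = "Grammar: My grammar are bad."
inputs = tokenizer(text, return_tensors="pt")

# Beam search usually yields more fluent corrections than greedy decoding.
outputs = model.generate(**inputs, max_length=64, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```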