T5-Efficient-TINY for grammar correction
This is a T5-Efficient-TINY model trained on a subset of the C4_200M dataset for English grammar correction. To introduce additional errors, random typos were added to the input sentences with the nlpaug library. Since the model was trained on a single task, no task prefix is needed.
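The typo-injection step can be pictured with a minimal sketch. This is not the model's actual augmentation code; it is a hand-rolled stand-in (using only the standard library, not nlpaug itself) for the kind of character-level corruption that nlpaug's character augmenters apply, with a hypothetical `add_typos` helper and an assumed corruption rate:

```python
import random

def add_typos(sentence, rate=0.1, seed=None):
    """Corrupt a clean sentence with random character typos
    (swap, drop, or duplicate), roughly mimicking character-level
    augmentation. `rate` is the per-letter corruption probability."""
    rng = random.Random(seed)
    chars = list(sentence)
    out = []
    i = 0
    while i < len(chars):
        c = chars[i]
        if c.isalpha() and rng.random() < rate:
            op = rng.choice(["swap", "drop", "dup"])
            if op == "swap" and i + 1 < len(chars) and chars[i + 1].isalpha():
                # Transpose two adjacent letters.
                out.append(chars[i + 1])
                out.append(c)
                i += 2
                continue
            elif op == "drop":
                # Delete the current letter.
                i += 1
                continue
            else:
                # Duplicate the current letter.
                out.append(c)
                out.append(c)
        else:
            out.append(c)
        i += 1
    return "".join(out)

clean = "She went to the store yesterday."
print(add_typos(clean, rate=0.2, seed=42))
```

Pairs of (corrupted, clean) sentences produced this way give the model extra supervision for fixing surface-level spelling errors on top of the grammatical errors already present in C4_200M.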
The model was trained as part of a project during the Full Stack Deep Learning course. An ONNX version of the model is deployed on the site and can be run directly in the browser.