---
datasets:
- jhu-clsp/jfleg
language:
- en
base_model:
- google-t5/t5-base
pipeline_tag: text2text-generation
library_name: transformers
tags:
- text-generation-inference
- grammar
---
This model is part of the [GrammarCorrector](https://github.com/akhmat-s/GrammarCorrector) tool.
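A minimal usage sketch with the transformers `pipeline` (the Hub model ID below is a placeholder; replace it with this repository's actual ID):

```python
from transformers import pipeline

# Placeholder model ID -- substitute this repository's actual Hub ID.
corrector = pipeline("text2text-generation", model="akhmat-s/GrammarCorrector")

result = corrector("He go to school every days.", max_new_tokens=64)
print(result[0]["generated_text"])
```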
"[FlanT5 from scratch for the grammar correction tool](https://medium.com/@akhmat-s/flant5-from-scratch-for-the-grammar-correction-tool-deadba9a6778)" article about how this models was trained:
>FlanT5 was trained using the [JFLEG](https://arxiv.org/abs/1702.04066) dataset. The primary objective of the experiment was to develop a highly effective tool using relatively small models, minimal datasets, and constrained computational resources.
>
>To accomplish this goal, we implemented two key strategies:
>- [Perplexity-Based Data Pruning With Small Reference Models](https://arxiv.org/abs/2405.20541).
>- A simple sampling and voting method for [multiple LLM agents](https://arxiv.org/abs/2402.05120).
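
As an illustration of the first strategy, here is a hedged sketch of perplexity-based data pruning with a small reference model; the reference model (GPT-2) and the cutoff value are illustrative assumptions, not the exact recipe from the article:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small reference model -- GPT-2 is an assumption for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
ref_model = AutoModelForCausalLM.from_pretrained("gpt2")
ref_model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the reference model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = ref_model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

corpus = ["She goes to school every day.", "xq zvf pltk wrrm"]
# Keep only examples the reference model scores as reasonably likely;
# the threshold is an arbitrary illustrative choice.
pruned = [s for s in corpus if perplexity(s) < 200.0]
```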
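
The second strategy can be sketched as follows: sample several candidate corrections and keep the most frequent one. The sampling parameters and model ID are again illustrative placeholders:

```python
from collections import Counter

from transformers import pipeline

# Placeholder model ID, as above.
corrector = pipeline("text2text-generation", model="akhmat-s/GrammarCorrector")

# Sample several candidate corrections and keep the majority answer.
candidates = corrector(
    "He go to school every days.",
    do_sample=True,
    num_return_sequences=5,
    max_new_tokens=64,
)
texts = [c["generated_text"] for c in candidates]
best = Counter(texts).most_common(1)[0][0]
print(best)
```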