Update README.md
README.md CHANGED
@@ -90,7 +90,7 @@ print(results)
**For Batch Inference:** see [this discussion thread](https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis/discussions/1) for details, but essentially the dataset consists of several sentences at a time, so I'd recommend running inference **in the same fashion:** batches of roughly 64-96 tokens (or 2-3 sentences split with regex)
-
- it is also helpful to **first** check whether or not a given sentence needs grammar correction before using the text2text model. You can do this with BERT-type models fine-tuned on CoLA like `
+
- it is also helpful to **first** check whether or not a given sentence needs grammar correction before using the text2text model. You can do this with BERT-type models fine-tuned on CoLA like `textattack/roberta-base-CoLA`
- I made a notebook demonstrating batch inference [here](https://colab.research.google.com/gist/pszemraj/6e961b08970f98479511bb1e17cdb4f0/batch-grammar-check-correct-demo.ipynb)
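
The updated bullet describes a two-stage flow: screen each sentence with a CoLA acceptability classifier (e.g. `textattack/roberta-base-CoLA`), and only send sentences flagged as unacceptable to the text2text correction model. Below is a minimal, illustrative sketch of that flow using `transformers` pipelines; the label convention (`LABEL_1` = linguistically acceptable) and the `max_length` value are assumptions to verify against the respective model cards, and the linked notebook demonstrates the properly batched version.

```python
# Sketch: check grammatical acceptability first, correct only when needed.
# Assumption: for textattack/roberta-base-CoLA, LABEL_1 = acceptable, LABEL_0 = not.
from transformers import pipeline

checker = pipeline("text-classification", model="textattack/roberta-base-CoLA")
corrector = pipeline(
    "text2text-generation", model="pszemraj/flan-t5-large-grammar-synthesis"
)

sentences = [
    "i can has cheezburger",
    "There is no cat in the house.",
]

results = []
for sent in sentences:
    pred = checker(sent)[0]
    if pred["label"] == "LABEL_1":
        # Already judged acceptable: skip the (slower) seq2seq pass.
        results.append(sent)
    else:
        results.append(corrector(sent, max_length=128)[0]["generated_text"])

print(results)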