---
language:
- en
license:
- cc-by-nc-sa-4.0
- apache-2.0
tags:
- grammar
- spelling
- punctuation
- error-correction
- grammar synthesis
- FLAN
datasets:
- jfleg
widget:
- text: "There car broke down so their hitching a ride to they're class."
example_title: "compound-1"
- text: "i can has cheezburger"
example_title: "cheezburger"
- text: "so em if we have an now so with fito ringina know how to estimate the tren given the ereafte mylite trend we can also em an estimate is nod s
i again tort watfettering an we have estimated the trend an
called wot to be called sthat of exty right now we can and look at
wy this should not hare a trend i becan we just remove the trend an and we can we now estimate
tesees ona effect of them exty"
example_title: "Transcribed Audio Example 2"
- text: "My coworker said he used a financial planner to help choose his stocks so he wouldn't loose money."
example_title: "incorrect word choice (context)"
- text: "good so hve on an tadley i'm not able to make it to the exla session on monday this week e which is why i am e recording pre recording
an this excelleision and so to day i want e to talk about two things and first of all em i wont em wene give a summary er about
ta ohow to remove trents in these nalitives from time series"
example_title: "lowercased audio transcription output"
- text: "Frustrated, the chairs took me forever to set up."
example_title: "dangling modifier"
- text: "I would like a peice of pie."
example_title: "miss-spelling"
- text: "Which part of Zurich was you going to go hiking in when we were there for the first time together? ! ?"
example_title: "chatbot on Zurich"
- text: "Most of the course is about semantic or content of language but there are also interesting topics to be learned from the servicefeatures except statistics in characters in documents. At this point, Elvthos introduces himself as his native English speaker and goes on to say that if you continue to work on social scnce,"
example_title: "social science ASR summary output"
- text: "they are somewhat nearby right yes please i'm not sure how the innish is tepen thut mayyouselect one that istatte lo variants in their property e ere interested and anyone basical e may be applyind reaching the browing approach were"
- "medical course audio transcription"
parameters:
max_length: 128
min_length: 4
num_beams: 8
repetition_penalty: 1.21
length_penalty: 1
  early_stopping: true
---
# grammar-synthesis: flan-t5-xl
<a href="https://colab.research.google.com/gist/pszemraj/43fc6a5c5acd94a3d064384dd1f3654c/demo-flan-t5-xl-grammar-synthesis.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This model is a fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on an extended version of the `JFLEG` dataset.
![ex](https://i.imgur.com/zACakst.png)
<center>note: as this model is on the larger side, the hosted inference API may time out</center>
## Model description
The intent is to create a text2text language model that successfully performs "single-shot grammar correction" on potentially grammatically incorrect text **that may contain many errors**, with the important qualifier that **it does not semantically change text/information that IS grammatically correct**.
Try some of the more severe error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
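For reference, below is a minimal usage sketch with the `transformers` pipeline. The repo id `pszemraj/flan-t5-xl-grammar-synthesis` is assumed from this card's title and author, and the generation kwargs mirror the widget parameters declared above; adjust both as needed.

```python
from transformers import pipeline

# repo id assumed from this card's title/author; adjust if different
corrector = pipeline(
    "text2text-generation",
    "pszemraj/flan-t5-xl-grammar-synthesis",
)

raw_text = "i can has cheezburger"

# generation kwargs mirror the widget parameters in this card's metadata
results = corrector(
    raw_text,
    max_length=128,
    min_length=4,
    num_beams=8,
    repetition_penalty=1.21,
    length_penalty=1.0,
    early_stopping=True,
)
print(results[0]["generated_text"])
```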
## Limitations
- dataset license: `cc-by-nc-sa-4.0`
- model license: `apache-2.0`
- currently **a work in progress**! While probably useful for "single-shot grammar correction" in many cases, **check the output for correctness, ok?**
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
#### Session One
- TODO: add this. Session One was a single epoch at a higher learning rate.
#### Session Two
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2.0
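
For illustration, the list above maps onto `Seq2SeqTrainingArguments` roughly as in the sketch below. This is a hypothetical reconstruction, not the actual training script: `output_dir` is a placeholder, and the Adam betas/epsilon shown match the library defaults.

```python
from transformers import Seq2SeqTrainingArguments

# hypothetical reconstruction of the Session Two settings listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="./grammar-synthesis-flan-t5-xl",  # placeholder path
    learning_rate=4e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # gives the total train batch size of 64
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=2.0,
)
```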