license: cc-by-nc-sa-4.0
tags:
- grammar
- spelling
- punctuation
- error-correction
- grammar synthesis
datasets:
- jfleg
widget:
- text: i can has cheezburger
example_title: cheezburger
- text: There car broke down so their hitching a ride to they're class.
example_title: compound-1
- text: >-
so em if we have an now so with fito ringina know how to estimate the tren
given the ereafte mylite trend we can also em an estimate is nod s i again
tort watfettering an we have estimated the trend an called wot to be
called sthat of exty right now we can and look at wy this should not hare
a trend i becan we just remove the trend an and we can we now estimate
tesees ona effect of them exty
example_title: Transcribed Audio Example 2
- text: >-
My coworker said he used a financial planner to help choose his stocks so
he wouldn't loose money.
example_title: incorrect word choice (context)
- text: >-
good so hve on an tadley i'm not able to make it to the exla session on
monday this week e which is why i am e recording pre recording an this
excelleision and so to day i want e to talk about two things and first of
all em i wont em wene give a summary er about ta ohow to remove trents in
these nalitives from time series
example_title: lowercased audio transcription output
- text: >-
Semo eaxmeslp of bda gmaramr ttah occru deu to nounprnooun ageremten
errrso inlceud Anan adn Pat aer mairred he has bnee togethre fro 20 yaesr
Anna and Pta aer plraul wheil he is sniurgla Teh sentecne suhold rdea Aann
adn Pat are mraried tyhe heav
example_title: descramble unintelligible text
- text: >-
Most of the course is about semantic or content of language but there are
also interesting topics to be learned from the servicefeatures except
statistics in characters in documents. At this point, Elvthos introduces
himself as his native English speaker and goes on to say that if you
continue to work on social scnce,
example_title: social science ASR summary output
parameters:
max_length: 128
min_length: 2
num_beams: 8
repetition_penalty: 1.5
length_penalty: 0.95
early_stopping: true
grammar-synthesis-base (beta)
A fine-tuned version of google/t5-base-lm-adapt for grammar correction, trained on an expanded version of the JFLEG dataset. Check out a demo notebook on Colab here.
Usage in Python (after pip install transformers):
from transformers import pipeline

# load the grammar-correction pipeline (downloads the model on first use)
corrector = pipeline(
    'text2text-generation',
    'pszemraj/grammar-synthesis-base',
)

raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)
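For inputs longer than the generation max_length (128 in the widget config above), one approach is to split the text into sentence-sized chunks, correct each chunk, and rejoin the results. A minimal sketch with a naive regex-based splitter — chunk_text is a hypothetical helper, and the character-based cap is a stand-in for real token counting:

```python
import re

def chunk_text(text: str, max_chars: int = 128) -> list[str]:
    """Split text on sentence boundaries into chunks of at most max_chars.

    Naive sketch: a real pipeline would count tokens with the model's
    tokenizer rather than characters, and handle abbreviations.
    """
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ''
    for sent in sentences:
        candidate = (current + ' ' + sent).strip()
        if current and len(candidate) > max_chars:
            chunks.append(current)
            current = sent
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

# Each chunk can then be passed through the corrector, e.g.:
# corrected = ' '.join(
#     corrector(c)[0]['generated_text'] for c in chunk_text(long_text)
# )
```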
Model description
The intent is to create a text2text language model that performs "single-shot grammar correction": given potentially incorrect text that may contain many mistakes, it corrects them — with the important qualifier that it does not semantically change text/information that IS already grammatically correct.
Compare some of the heavier-error examples against other grammar-correction models to see the difference :)
Limitations
- dataset:
cc-by-nc-sa-4.0
- model:
apache-2.0
- this is still a work in progress; while it is probably useful for "single-shot grammar correction" in a lot of cases, give the outputs a glance for correctness, ok?
Use Cases
Obviously, this section is quite general, as there are many things one can use "general single-shot grammar correction" for. Some ideas/use cases:
- Correcting highly error-prone LM outputs, such as audio transcription (ASR) — several of the widget examples above are exactly this — or handwriting OCR.
- To be investigated further: depending on the model/system used, it may also be worth applying this after OCR on typed characters.
- Correcting/infilling text generated by text-generation models so it is cohesive and free of obvious errors that break conversational immersion. I use this on the outputs of this OPT-2.7B chatbot-esque model of myself.
An example of this model running on CPU with beam search:
original response:
ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to
synthesizing took 306.12 seconds
Final response in 1294.857 s:
I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
Note: I have some additional logic that removes any period at the end of the final sentence in this chatbot setting, to avoid coming off as passive-aggressive.
- Somewhat related to #2 above: fixing/correcting so-called tortured phrases that are dead giveaways that text was generated by a language model. Note that some of these are not fixed, especially as they venture into domain-specific terminology (e.g., irregular timberland instead of Random Forest).
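The trailing-period trimming mentioned in the chatbot note above can be sketched as follows. strip_final_period is a hypothetical name; the actual post-processing logic is not included in this card:

```python
def strip_final_period(reply: str) -> str:
    """Drop a single trailing period so short chat replies read less curt.

    Leaves '!', '?', and ellipses ('...') untouched.
    """
    reply = reply.rstrip()
    if reply.endswith('.') and not reply.endswith('...'):
        return reply[:-1]
    return reply

print(strip_final_period('Sounds good.'))  # -> Sounds good
print(strip_final_period('Really?'))       # -> Really?
print(strip_final_period('Well...'))       # -> Well...
```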
Training and evaluation data
More information needed 😉
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2
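The listed batch-size values are internally consistent: the per-device batch size times the gradient-accumulation steps gives the total train batch size (the listed total implies the device count was effectively a single factor of one here — an assumption, since the card only states "multi-GPU"):

```python
train_batch_size = 8             # per-device batch size, from the card
gradient_accumulation_steps = 64 # from the card

# effective (total) train batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)    # -> 512, matching the card
```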
Training results
Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1