---
license: cc-by-nc-sa-4.0
tags:
- grammar
- spelling
- punctuation
- error-correction
datasets:
- jfleg
widget:
- text: "i can has cheezburger"
  example_title: "cheezburger"
- text: "There car broke down so their hitching a ride to they're class."
  example_title: "compound-1"
- text: "so em if we have an now so with fito ringina know how to estimate the tren given the ereafte mylite trend we can also em an estimate is nod s
i again tort watfettering an we have estimated the trend an
called wot to be called sthat of exty right now we can and look at
wy this should not hare a trend i becan we just remove the trend an and we can we now estimate
tesees ona effect of them exty"
  example_title: "Transcribed Audio Example 2"
- text: "My coworker said he used a financial planner to help choose his stocks so he wouldn't loose money."
  example_title: "incorrect word choice (context)"
- text: "good so hve on an tadley i'm not able to make it to the exla session on monday this week e which is why i am e recording pre recording
an this excelleision and so to day i want e to talk about two things and first of all em i wont em wene give a summary er about
ta ohow to remove trents in these nalitives from time series"
  example_title: "lowercased audio transcription output"
- text: "Frustrated, the chairs took me forever to set up."
  example_title: "dangling modifier"
- text: "I would like a peice of pie."
  example_title: "miss-spelling"
- text: "Which part of Zurich was you going to go hiking in when we were there for the first time together? ! ?"
  example_title: "chatbot on Zurich"

parameters:
  max_length: 128
  min_length: 4
  num_beams: 4
  repetition_penalty: 1.21
  length_penalty: 1
  early_stopping: True
---

> A more recent version can be found [here](https://huggingface.co/pszemraj/grammar-synthesis-large). Training smaller and/or comparably sized models is a WIP.

# t5-v1_1-base-ft-jflAUG

**GOAL:** a more robust and generalized grammar and spelling correction model that corrects everything in a single shot. It should have a minimal impact on the semantics of correct sentences (i.e. it does not change things that do not need to be changed).

- this model _(at least from preliminary testing)_ can handle source text with a large number of errors (e.g. raw audio transcription output) and still produce cohesive results.
- a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on an expanded version of the [JFLEG dataset](https://aclanthology.org/E17-2037/). A basic usage sketch follows below.
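
A minimal inference sketch with the `transformers` library, reusing the generation parameters from the widget config above. The repo id is assumed from this card's title; adjust it if the checkpoint is hosted elsewhere.

```python
from transformers import pipeline

# repo id assumed from this card's title
corrector = pipeline(
    "text2text-generation",
    model="pszemraj/t5-v1_1-base-ft-jflAUG",
)

raw_text = "i can has cheezburger"
result = corrector(
    raw_text,
    max_length=128,
    min_length=4,
    num_beams=4,
    repetition_penalty=1.21,
    length_penalty=1.0,
    early_stopping=True,
)
print(result[0]["generated_text"])
```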

## Model description

- this is a WIP; this fine-tuned model is v1.
- long term: a generalized grammar and spelling correction model that can handle many error types at the same time.
- currently, it seems to be more of a "gibberish to mostly correct English" translator.

## Intended uses & limitations

- try some tests with the [examples here](https://www.engvid.com/english-resource/50-common-grammar-mistakes-in-english/)
- thus far, known limitations are: sentence fragments are not autocorrected (at least when entered individually), and more complicated pronoun agreement (they/he/her, etc.) is not always fixed.

## Training and evaluation data

- trained as text-to-text (error-laden input → corrected output)
- JFLEG dataset + additional selected and/or generated grammar corrections; a snippet for inspecting the base dataset follows below
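
The expanded/generated portion of the corpus is not published with this card, but the base JFLEG split can be inspected with the `datasets` library:

```python
from datasets import load_dataset

# loads only the original JFLEG data, not the additional corrections
jfleg = load_dataset("jfleg", split="validation")

sample = jfleg[0]
print("source:     ", sample["sentence"])     # uncorrected sentence
print("corrections:", sample["corrections"])  # list of human-written corrections
```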

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5
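
A minimal sketch of how these settings map onto `Seq2SeqTrainingArguments`; the actual training script is not part of this card, so treat the mapping as an approximation. The Adam betas/epsilon listed above are the library defaults and are therefore not set explicitly.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./t5-v1_1-base-ft-jflAUG",  # hypothetical output path
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 8 per device x 8 steps -> total batch size 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=5,
)
```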


### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6