---
license: apache-2.0
base_model: google/flan-t5-small
datasets:
  - grammarly/coedit
tags:
  - generated_from_trainer
  - text-generation-inference
metrics:
  - rouge
model-index:
  - name: coedit-small
    results: []
language:
  - en
widget:
  - text: >-
      Fix the grammar: When I grow up, I start to understand what he said is
      quite right.
    example_title: Fluency
  - text: >-
      Make this text coherent: Their flight is weak. They run quickly through
      the tree canopy.
    example_title: Coherence
  - text: >-
      Rewrite to make this easier to understand: A storm surge is what
      forecasters consider a hurricane's most treacherous aspect.
    example_title: Simplification
  - text: 'Paraphrase this: Do you know where I was born?'
    example_title: Paraphrase
  - text: >-
      Write this more formally: omg i love that song im listening to it right
      now
    example_title: Formalize
  - text: 'Write in a more neutral way: The authors'' exposé on nutrition studies.'
    example_title: Neutralize
---

# coedit-small

This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the [CoEdIT](https://huggingface.co/datasets/grammarly/coedit) dataset. It achieves the following results on the evaluation set:

- Loss: 0.8242
- Rouge1: 58.7504
- Rouge2: 45.1374
- Rougel: 55.4161
- Rougelsum: 55.4599
- Gen Len: 16.5245

## Model description

coedit-small is an instruction-following text editing model: given an English sentence prefixed with a plain-language instruction (e.g. "Fix the grammar:", "Paraphrase this:"), it rewrites the sentence accordingly. The widget examples above cover the supported edit intents: fluency, coherence, simplification, paraphrasing, formalization, and neutralization.

## Intended uses & limitations

The model is intended for instruction-guided editing of English text. It has only been evaluated on the CoEdIT evaluation set (see the results above); behavior on other languages, long documents, or instructions outside the training distribution has not been assessed.
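
A minimal inference sketch using the `transformers` library. The repo id `jbochi/coedit-small` is an assumption based on this card's author; substitute the actual repo id or a local checkpoint path:

```python
# Sketch only: "jbochi/coedit-small" is an assumed repo id; adjust as needed.
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo_id = "jbochi/coedit-small"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = T5ForConditionalGeneration.from_pretrained(repo_id)

# CoEdIT prompts prepend a plain-English instruction to the text to edit.
prompt = ("Fix the grammar: When I grow up, I start to understand "
          "what he said is quite right.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The model follows the T5 seq2seq interface, so the same pattern works for any of the instruction types listed in the widget examples.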

## Training and evaluation data

The model was fine-tuned and evaluated on the [grammarly/coedit](https://huggingface.co/datasets/grammarly/coedit) dataset, a collection of instruction-based text editing examples.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|--------------:|------:|------:|----------------:|--------:|--------:|--------:|----------:|--------:|
| 0.9482        | 1.0   | 4317  | 0.8878          | 58.4501 | 44.2623 | 54.4468 | 54.51     | 16.5088 |
| 0.9155        | 2.0   | 8634  | 0.8485          | 58.6609 | 44.7759 | 54.9844 | 55.0503   | 16.5339 |
| 0.8964        | 3.0   | 12951 | 0.8402          | 58.712  | 44.9838 | 55.2171 | 55.2697   | 16.5251 |
| 0.9049        | 4.0   | 17268 | 0.8305          | 58.7767 | 45.1325 | 55.3955 | 55.4522   | 16.5181 |
| 0.8948        | 5.0   | 21585 | 0.8242          | 58.7504 | 45.1374 | 55.4161 | 55.4599   | 16.5245 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.7
- Tokenizers 0.15.0