---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - rouge
model-index:
  - name: t5-base-DreamBank-Generation-Char
    results: []
language:
  - en
widget:
  - text: >-
      I'm in an auditorium. Susie S is concerned at her part in this disability
      awareness spoof we are preparing. I ask, 'Why not do it? Lots of AB's
      represent us in a patronizing way. Why shouldn't we represent ourselves in
      a good, funny way?' I watch the video we all made. It is funny. I try to
      sit on a folding chair. Some guy in front talks to me. Merle is in the
      audience somewhere. [BL]
---

# t5-base-DreamBank-Generation-Char

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the DreamBank (DB) emotion-classification data. It achieves the following results on the evaluation set (note that these figures refer to the best uploaded checkpoint; a usage sketch follows the list below):

- Loss: 0.3047
- Rouge1: 0.8609
- Rouge2: 0.7956
- Rougel: 0.8476
- Rougelsum: 0.8578
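
The model can be used with the standard transformers seq2seq API. The sketch below is illustrative only: the repository id is assumed to match the model name and account shown on this page, and the dream report is abbreviated from the widget example above; the exact output string depends on the annotation scheme the model was trained to generate.

```python
# Minimal usage sketch. The repository id is an assumption based on the
# model name and account shown on this page.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "lorenzoscottb/t5-base-DreamBank-Generation-Char"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Abbreviated dream report taken from the widget example above.
report = (
    "I'm in an auditorium. Susie S is concerned at her part in this disability "
    "awareness spoof we are preparing. I watch the video we all made. It is funny."
)

inputs = tokenizer(report, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```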

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative mapping to training arguments follows the list):

- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
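
The listed values can be mapped onto Hugging Face `Seq2SeqTrainingArguments` as in the sketch below. The actual training script is not part of this card, so `output_dir`, the evaluation strategy, and `predict_with_generate` are assumptions; the Adam betas/epsilon above are the Trainer defaults.

```python
# Illustrative mapping of the listed hyperparameters onto Seq2SeqTrainingArguments.
# output_dir, evaluation_strategy, and predict_with_generate are assumptions,
# not taken from this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-DreamBank-Generation-Char",  # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumed: the results table reports per-epoch eval
    predict_with_generate=True,   # assumed: needed to compute ROUGE on generations
)
```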

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log        | 1.0   | 24   | 0.4863          | 0.7670 | 0.6655 | 0.7575 | 0.7634    |
| No log        | 2.0   | 48   | 0.4284          | 0.6870 | 0.5207 | 0.6846 | 0.6875    |
| No log        | 3.0   | 72   | 0.3541          | 0.7659 | 0.6742 | 0.7600 | 0.7625    |
| No log        | 4.0   | 96   | 0.3211          | 0.8147 | 0.7251 | 0.7965 | 0.8078    |
| No log        | 5.0   | 120  | 0.3103          | 0.8400 | 0.7747 | 0.8313 | 0.8371    |
| No log        | 6.0   | 144  | 0.3220          | 0.8538 | 0.7867 | 0.8285 | 0.8515    |
| No log        | 7.0   | 168  | 0.3047          | 0.8609 | 0.7956 | 0.8476 | 0.8578    |
| No log        | 8.0   | 192  | 0.3106          | 0.8574 | 0.7836 | 0.8401 | 0.8509    |
| No log        | 9.0   | 216  | 0.3054          | 0.8532 | 0.7857 | 0.8378 | 0.8481    |
| No log        | 10.0  | 240  | 0.3136          | 0.8455 | 0.7789 | 0.8282 | 0.8432    |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.12.1
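
A quick way to check that a local environment matches the versions listed above (package import names are the standard ones):

```python
# Quick check that the local environment matches the versions listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.25.1
print("PyTorch:", torch.__version__)              # expected 1.12.1
print("Datasets:", datasets.__version__)          # expected 2.5.1
print("Tokenizers:", tokenizers.__version__)      # expected 0.12.1
```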

## Cite

If you use our models in your work, please consider citing:

@article{BERTOLINI2024406,
  title   = {DReAMy: a library for the automatic analysis and annotation of dream reports with multilingual large language models},
  journal = {Sleep Medicine},
  volume  = {115},
  pages   = {406-407},
  year    = {2024},
  note    = {Abstracts from the 17th World Sleep Congress},
  issn    = {1389-9457},
  doi     = {10.1016/j.sleep.2023.11.1092},
  url     = {https://www.sciencedirect.com/science/article/pii/S1389945723015186},
  author  = {L. Bertolini and A. Michalak and J. Weeds}
}