
text_shortening_model_v79

This model is a fine-tuned version of t5-small for text shortening; the fine-tuning dataset is not specified. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 1.0551
  • Bert precision: 0.8947
  • Bert recall: 0.8962
  • Bert f1-score: 0.895
  • Average word count: 6.7804
  • Max word count: 16
  • Min word count: 1
  • Average token count: 10.8466
  • % shortened texts with length > 12: 1.5951
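
For quick experimentation, here is a minimal inference sketch using the Hugging Face transformers API. The model id is taken from this repository; whether fine-tuning used a t5-style task prefix (e.g. "summarize: ") is not documented, so the raw-text input and the generation settings below are assumptions, not the card's own recipe.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ldos/text_shortening_model_v79"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The quick brown fox jumped over the extremely lazy dog sleeping in the warm afternoon sun."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Beam search with a tight length budget, matching the short outputs
# reported above (average ~6.8 words); these settings are illustrative.
output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```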

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto the transformers API follows the list):

  • learning_rate: 7e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40
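
As referenced above, here is a sketch of how these hyperparameters could map onto Seq2SeqTrainingArguments in transformers 4.33 (the version listed below). The output_dir and evaluation_strategy values are illustrative assumptions; the reported Adam betas and epsilon are the library's default optimizer settings, so no explicit optimizer argument is needed.

```python
from transformers import Seq2SeqTrainingArguments

# Assumed mapping of the reported hyperparameters; not the authors' script.
training_args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v79",  # assumption
    learning_rate=7e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    evaluation_strategy="epoch",  # assumption: the table below reports metrics once per epoch
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the default
    # AdamW optimizer, so it requires no explicit configuration here.
)
```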

Training results

| Training Loss | Epoch | Step | Validation Loss | Bert precision | Bert recall | Bert f1-score | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 2.0194 | 1.0 | 30 | 1.4487 | 0.8778 | 0.8746 | 0.8755 | 6.7755 | 16 | 1 | 10.7288 | 2.3313 |
| 1.58 | 2.0 | 60 | 1.3193 | 0.8835 | 0.8837 | 0.883 | 6.9301 | 16 | 2 | 10.7791 | 2.3313 |
| 1.4385 | 3.0 | 90 | 1.2492 | 0.8833 | 0.8855 | 0.8839 | 7.0368 | 16 | 2 | 10.9816 | 2.6994 |
| 1.3616 | 4.0 | 120 | 1.2111 | 0.8877 | 0.8873 | 0.887 | 6.8466 | 16 | 2 | 10.7509 | 1.8405 |
| 1.2976 | 5.0 | 150 | 1.1685 | 0.8869 | 0.8878 | 0.8868 | 6.8564 | 17 | 2 | 10.8172 | 1.8405 |
| 1.2495 | 6.0 | 180 | 1.1559 | 0.8885 | 0.8895 | 0.8885 | 6.8577 | 16 | 2 | 10.8564 | 2.0859 |
| 1.201 | 7.0 | 210 | 1.1353 | 0.8889 | 0.891 | 0.8894 | 6.9521 | 16 | 2 | 11.0012 | 2.3313 |
| 1.1717 | 8.0 | 240 | 1.1164 | 0.8892 | 0.89 | 0.8891 | 6.8601 | 16 | 1 | 10.8933 | 2.0859 |
| 1.1352 | 9.0 | 270 | 1.1110 | 0.8902 | 0.8891 | 0.8891 | 6.708 | 16 | 1 | 10.7436 | 1.1043 |
| 1.0984 | 10.0 | 300 | 1.1037 | 0.8901 | 0.8909 | 0.8901 | 6.8233 | 17 | 1 | 10.8503 | 1.9632 |
| 1.0745 | 11.0 | 330 | 1.0937 | 0.8894 | 0.892 | 0.8902 | 6.9362 | 17 | 2 | 10.9742 | 2.3313 |
| 1.0509 | 12.0 | 360 | 1.0907 | 0.8911 | 0.8916 | 0.8908 | 6.8233 | 17 | 1 | 10.8564 | 1.9632 |
| 1.0269 | 13.0 | 390 | 1.0805 | 0.8906 | 0.8934 | 0.8915 | 6.9448 | 17 | 1 | 11.0135 | 2.2086 |
| 1.0126 | 14.0 | 420 | 1.0784 | 0.8912 | 0.8935 | 0.8919 | 6.9264 | 17 | 2 | 10.973 | 2.3313 |
| 0.9959 | 15.0 | 450 | 1.0725 | 0.8929 | 0.8944 | 0.8932 | 6.8294 | 17 | 1 | 10.8957 | 2.2086 |
| 0.9717 | 16.0 | 480 | 1.0715 | 0.8916 | 0.8941 | 0.8924 | 6.919 | 17 | 1 | 10.9963 | 2.0859 |
| 0.9552 | 17.0 | 510 | 1.0727 | 0.8935 | 0.8949 | 0.8937 | 6.8282 | 17 | 1 | 10.9055 | 1.9632 |
| 0.9461 | 18.0 | 540 | 1.0665 | 0.8947 | 0.8955 | 0.8947 | 6.8061 | 17 | 1 | 10.8613 | 1.5951 |
| 0.926 | 19.0 | 570 | 1.0664 | 0.8948 | 0.896 | 0.895 | 6.7853 | 16 | 1 | 10.8515 | 1.3497 |
| 0.9192 | 20.0 | 600 | 1.0636 | 0.8948 | 0.8953 | 0.8946 | 6.7718 | 16 | 1 | 10.8209 | 1.4724 |
| 0.9101 | 21.0 | 630 | 1.0581 | 0.8954 | 0.897 | 0.8957 | 6.8221 | 16 | 1 | 10.8724 | 1.5951 |
| 0.899 | 22.0 | 660 | 1.0599 | 0.8954 | 0.8974 | 0.8959 | 6.8405 | 16 | 1 | 10.8982 | 1.5951 |
| 0.8843 | 23.0 | 690 | 1.0586 | 0.8943 | 0.8962 | 0.8948 | 6.8393 | 17 | 2 | 10.9055 | 1.9632 |
| 0.8779 | 24.0 | 720 | 1.0572 | 0.8932 | 0.8961 | 0.8942 | 6.8736 | 17 | 2 | 10.9656 | 2.0859 |
| 0.8725 | 25.0 | 750 | 1.0573 | 0.8939 | 0.8963 | 0.8947 | 6.8098 | 16 | 2 | 10.9104 | 1.7178 |
| 0.8567 | 26.0 | 780 | 1.0591 | 0.8951 | 0.8968 | 0.8955 | 6.7926 | 17 | 1 | 10.8945 | 1.5951 |
| 0.8549 | 27.0 | 810 | 1.0577 | 0.8945 | 0.8962 | 0.8948 | 6.8135 | 17 | 1 | 10.9018 | 1.8405 |
| 0.8467 | 28.0 | 840 | 1.0570 | 0.8948 | 0.8961 | 0.895 | 6.7669 | 16 | 1 | 10.8405 | 1.4724 |
| 0.833 | 29.0 | 870 | 1.0577 | 0.895 | 0.896 | 0.895 | 6.7546 | 16 | 1 | 10.8294 | 1.3497 |
| 0.8284 | 30.0 | 900 | 1.0548 | 0.8942 | 0.8957 | 0.8945 | 6.7816 | 16 | 1 | 10.8589 | 1.4724 |
| 0.8296 | 31.0 | 930 | 1.0565 | 0.8947 | 0.8967 | 0.8952 | 6.8037 | 16 | 1 | 10.8982 | 1.4724 |
| 0.8156 | 32.0 | 960 | 1.0550 | 0.8945 | 0.8961 | 0.8948 | 6.7914 | 16 | 2 | 10.8601 | 1.5951 |
| 0.8095 | 33.0 | 990 | 1.0567 | 0.8944 | 0.8962 | 0.8948 | 6.8049 | 16 | 2 | 10.881 | 1.7178 |
| 0.8066 | 34.0 | 1020 | 1.0564 | 0.8948 | 0.8961 | 0.895 | 6.7853 | 16 | 1 | 10.8405 | 1.8405 |
| 0.817 | 35.0 | 1050 | 1.0567 | 0.8951 | 0.8961 | 0.8952 | 6.7509 | 16 | 1 | 10.8172 | 1.5951 |
| 0.8155 | 36.0 | 1080 | 1.0563 | 0.8949 | 0.8964 | 0.8952 | 6.7669 | 16 | 1 | 10.838 | 1.5951 |
| 0.808 | 37.0 | 1110 | 1.0560 | 0.8946 | 0.8965 | 0.8951 | 6.7926 | 16 | 1 | 10.8675 | 1.7178 |
| 0.8049 | 38.0 | 1140 | 1.0554 | 0.895 | 0.8965 | 0.8953 | 6.7742 | 16 | 1 | 10.8393 | 1.4724 |
| 0.8002 | 39.0 | 1170 | 1.0550 | 0.8946 | 0.8962 | 0.8949 | 6.7877 | 16 | 1 | 10.8491 | 1.5951 |
| 0.7912 | 40.0 | 1200 | 1.0551 | 0.8947 | 0.8962 | 0.895 | 6.7804 | 16 | 1 | 10.8466 | 1.5951 |
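
The card does not ship the evaluation script, so the following is only a plausible reconstruction of how the reported metrics could be computed: BERTScore via the evaluate library, and the word-count statistics with plain Python. The example predictions and references are hypothetical.

```python
import evaluate

# Hypothetical model outputs and references, for illustration only.
predictions = ["fox jumps over lazy dog"]
references = ["The quick brown fox jumped over the extremely lazy dog."]

# BERTScore returns per-example precision/recall/F1 lists.
bertscore = evaluate.load("bertscore")
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
bert_precision = sum(scores["precision"]) / len(scores["precision"])
bert_recall = sum(scores["recall"]) / len(scores["recall"])
bert_f1 = sum(scores["f1"]) / len(scores["f1"])

# Word-count statistics, assuming whitespace tokenization.
word_counts = [len(p.split()) for p in predictions]
avg_words = sum(word_counts) / len(word_counts)
max_words, min_words = max(word_counts), min(word_counts)
pct_over_12 = 100.0 * sum(wc > 12 for wc in word_counts) / len(word_counts)
```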

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3
