
task-t3

This model is a fine-tuned version of KpRT/task-t2 on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the metric list):

  • Loss: 0.3985
  • F1: 0.7669
  • Chronic Disease F1: 0.7755
  • Chronic Disease Num: 2507
  • Cancer F1: 0.7152
  • Cancer Num: 753
  • Allergy F1: 0.7833
  • Allergy Num: 271
  • Treatment F1: 0.7715
  • Treatment Num: 2963
  • Other F1: 0
  • Other Num: 0
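
The per-entity metrics above suggest a token-classification (NER) model over clinical entity types (chronic disease, cancer, allergy, treatment). Below is a minimal inference sketch, assuming the checkpoint exposes a standard transformers token-classification head; the example sentence and exact label strings are illustrative assumptions, not taken from this card:

```python
from transformers import pipeline

# Hedged sketch: assumes KpRT/task-t3 is a standard token-classification
# checkpoint. Entity label names are inferred from the metrics above.
ner = pipeline(
    "token-classification",
    model="KpRT/task-t3",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

text = "The patient has a history of asthma and was prescribed albuterol."
for span in ner(text):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```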

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2
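
These values map directly onto transformers TrainingArguments. A hedged reproduction sketch; the output directory is a placeholder, and the dataset, model head, and Trainer wiring are not described by this card:

```python
from transformers import TrainingArguments

# Only the values from the hyperparameter list above come from the card;
# output_dir is an assumption.
args = TrainingArguments(
    output_dir="task-t3",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    adam_beta1=0.9,        # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```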

Training results

| Training Loss | Epoch  | Step | Validation Loss | F1     | Chronic Disease F1 | Chronic Disease Num | Cancer F1 | Cancer Num | Allergy F1 | Allergy Num | Treatment F1 | Treatment Num | Other F1 | Other Num |
|---------------|--------|------|-----------------|--------|--------------------|---------------------|-----------|------------|------------|-------------|--------------|---------------|----------|-----------|
| 0.236         | 0.3135 | 100  | 0.4006          | 0.7553 | 0.7577             | 2507                | 0.7027    | 753        | 0.7767     | 271         | 0.7650       | 2963          | 0        | 0         |
| 0.2141        | 0.6270 | 200  | 0.4293          | 0.7513 | 0.7662             | 2507                | 0.6703    | 753        | 0.7850     | 271         | 0.7577       | 2963          | 0        | 0         |
| 0.2483        | 0.9404 | 300  | 0.4024          | 0.7628 | 0.7710             | 2507                | 0.6994    | 753        | 0.7765     | 271         | 0.7712       | 2963          | 0        | 0         |
| 0.19          | 1.2539 | 400  | 0.4005          | 0.7666 | 0.7764             | 2507                | 0.7010    | 753        | 0.8069     | 271         | 0.7720       | 2963          | 0        | 0         |
| 0.1997        | 1.5674 | 500  | 0.3986          | 0.7698 | 0.7786             | 2507                | 0.7137    | 753        | 0.7946     | 271         | 0.7750       | 2963          | 0        | 0         |
| 0.2297        | 1.8809 | 600  | 0.3985          | 0.7669 | 0.7755             | 2507                | 0.7152    | 753        | 0.7833     | 271         | 0.7715       | 2963          | 0        | 0         |
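
The per-entity F1 and support ("Num") columns are the kind of report produced by seqeval over BIO-tagged sequences. A hedged sketch of such a computation; the BIO label strings below are illustrative assumptions:

```python
from seqeval.metrics import classification_report

# Toy gold/predicted tag sequences with assumed label names; seqeval
# reports per-entity precision, recall, F1, and support ("Num").
y_true = [["B-treatment", "I-treatment", "O", "B-chronic_disease"]]
y_pred = [["B-treatment", "I-treatment", "O", "O"]]

print(classification_report(y_true, y_pred))
```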

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
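
A quick way to check a local environment against these versions (a sketch; the expected strings are simply the versions listed above):

```python
import transformers, torch, datasets, tokenizers

# Compare the installed stack against the versions listed in this card.
print(transformers.__version__)  # expect 4.42.4
print(torch.__version__)         # expect 2.3.1+cu121
print(datasets.__version__)      # expect 2.20.0
print(tokenizers.__version__)    # expect 0.19.1
```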

Model tree for KpRT/task-t3

Fine-tuning lineage: KpRT/task-t1 → KpRT/task-t2 → this model