Shijia committed
Commit ad430ab
1 Parent(s): 5170d2d

End of training
Files changed (3)
  1. README.md +83 -0
  2. generation_config.json +6 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ base_model: luqh/ClinicalT5-base
+ tags:
+ - generated_from_trainer
+ datasets:
+ - sem_eval_2024_task_2
+ metrics:
+ - rouge
+ model-index:
+ - name: ClinicalT5-base-finetuned-biomedical
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: sem_eval_2024_task_2
+       type: sem_eval_2024_task_2
+       config: sem_eval_2024_task_2_source
+       split: validation
+       args: sem_eval_2024_task_2_source
+     metrics:
+     - name: Rouge1
+       type: rouge
+       value: 51.0
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # ClinicalT5-base-finetuned-biomedical
+
+ This model is a fine-tuned version of [luqh/ClinicalT5-base](https://huggingface.co/luqh/ClinicalT5-base) on the sem_eval_2024_task_2 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2017
+ - Rouge1: 51.0
+ - Rouge2: 0.0
+ - Rougel: 51.0
+ - Rougelsum: 51.0
+ - Gen Len: 3.71
+
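+ As a usage illustration, not part of the auto-generated card: a minimal inference sketch. The repo id below is assumed from the commit author and model name, and the input string is a placeholder, since the card does not document the prompt format.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ model_id = "Shijia/ClinicalT5-base-finetuned-biomedical"  # assumed repo id
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+
+ # Gen Len ~3.7 above suggests the model emits very short outputs (e.g. labels).
+ inputs = tokenizer("premise: <clinical trial text> statement: <hypothesis>", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=8)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+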
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 2e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 5
+ - mixed_precision_training: Native AMP
+
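+ As a sketch, these settings map onto `Seq2SeqTrainingArguments` roughly as follows; `output_dir` and the evaluation flags are assumptions not stated in the card (the Adam betas and epsilon listed are the library defaults):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ args = Seq2SeqTrainingArguments(
+     output_dir="ClinicalT5-base-finetuned-biomedical",  # assumed
+     learning_rate=2e-5,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=4,
+     seed=42,
+     num_train_epochs=5,
+     lr_scheduler_type="linear",
+     fp16=True,  # mixed_precision_training: Native AMP
+     evaluation_strategy="epoch",  # assumed from the per-epoch results table below
+     predict_with_generate=True,   # assumed; needed to compute ROUGE/Gen Len at eval time
+ )
+ ```
+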
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
+ | No log | 1.0 | 425 | 0.2227 | 49.5 | 0.0 | 49.5 | 49.5 | 3.015 |
+ | 1.7568 | 2.0 | 850 | 0.2053 | 49.0 | 0.0 | 49.0 | 49.0 | 3.09 |
+ | 0.227 | 3.0 | 1275 | 0.2012 | 51.0 | 0.0 | 51.0 | 51.0 | 3.24 |
+ | 0.2186 | 4.0 | 1700 | 0.2011 | 52.0 | 0.0 | 52.0 | 52.0 | 3.29 |
+ | 0.2173 | 5.0 | 2125 | 0.2017 | 51.0 | 0.0 | 51.0 | 51.0 | 3.71 |
+
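+ Rouge1, Rougel, and Rougelsum coincide and Rouge2 stays at 0.0 throughout, which is consistent with very short, often single-word generations (Gen Len around 3-3.7 tokens): a one-word prediction contains no bigrams, so ROUGE-2 is identically zero. A sketch of the metric computation with the `evaluate` library; the example strings are hypothetical:
+
+ ```python
+ import evaluate
+
+ rouge = evaluate.load("rouge")
+ scores = rouge.compute(
+     predictions=["Entailment"],  # hypothetical model output
+     references=["Entailment"],   # hypothetical gold target
+ )
+ # evaluate returns scores in [0, 1]; the table above reports them scaled by 100.
+ print({k: round(v * 100, 1) for k, v in scores.items()})
+ ```
+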
+ ### Framework versions
+
+ - Transformers 4.35.2
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.35.2"
+ }
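As a side note, this file holds the decoding defaults that `model.generate()` picks up automatically. A sketch for inspecting it (the repo id is again an assumption):

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("Shijia/ClinicalT5-base-finetuned-biomedical")  # assumed repo id
print(gen_config.decoder_start_token_id, gen_config.eos_token_id, gen_config.pad_token_id)  # 0, 1, 0
```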
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7bf8f8d5da42bab8f2066cbae385e68490eae00c83895af31fb9ced7fe9f4139
+ oid sha256:3bae0690df248553fce57c1b7bef3c40cc19666232ec55e8217d7dc106258e31
  size 891644712
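This entry is a Git LFS pointer: the repository tracks only the sha256 oid and byte size, while the weights themselves live in LFS storage, so the commit swaps the oid without changing the size. A sketch for verifying a downloaded copy against the new oid:

```python
import hashlib

# Hash a locally downloaded model.safetensors and compare with the LFS oid.
h = hashlib.sha256()
with open("model.safetensors", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print(h.hexdigest())  # expect 3bae0690...258e31 after this commit
```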