tanatapanun committed
Commit 555c5e9
1 Parent(s): 7122776

Model save

README.md ADDED
@@ -0,0 +1,73 @@
+ ---
+ license: apache-2.0
+ base_model: GanjinZero/biobart-v2-base
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: fine-tuned-BioBART-2048-inputs-10-epochs
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # fine-tuned-BioBART-2048-inputs-10-epochs
+
+ This model is a fine-tuned version of [GanjinZero/biobart-v2-base](https://huggingface.co/GanjinZero/biobart-v2-base) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.7099
+ - Rouge1: 0.2904
+ - Rouge2: 0.1173
+ - Rougel: 0.2687
+ - Rougelsum: 0.2692
+ - Gen Len: 14.66
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
+ | No log        | 1.0   | 151  | 0.7536          | 0.2059 | 0.0784 | 0.1881 | 0.1881    | 13.31   |
+ | No log        | 2.0   | 302  | 0.7161          | 0.2569 | 0.0831 | 0.2279 | 0.2278    | 13.88   |
+ | No log        | 3.0   | 453  | 0.7013          | 0.2322 | 0.0818 | 0.2055 | 0.2059    | 14.57   |
+ | 0.7283        | 4.0   | 604  | 0.6976          | 0.2835 | 0.1095 | 0.2585 | 0.2584    | 14.34   |
+ | 0.7283        | 5.0   | 755  | 0.7012          | 0.2749 | 0.0921 | 0.2521 | 0.2528    | 14.35   |
+ | 0.7283        | 6.0   | 906  | 0.6963          | 0.2957 | 0.1073 | 0.2688 | 0.2690    | 14.97   |
+ | 0.5246        | 7.0   | 1057 | 0.7043          | 0.2824 | 0.1067 | 0.2570 | 0.2570    | 14.68   |
+ | 0.5246        | 8.0   | 1208 | 0.7043          | 0.2920 | 0.1158 | 0.2706 | 0.2722    | 14.16   |
+ | 0.5246        | 9.0   | 1359 | 0.7080          | 0.2849 | 0.1087 | 0.2603 | 0.2615    | 14.69   |
+ | 0.4414        | 10.0  | 1510 | 0.7099          | 0.2904 | 0.1173 | 0.2687 | 0.2692    | 14.66   |
+
+
+ ### Framework versions
+
+ - Transformers 4.36.2
+ - PyTorch 1.12.1+cu113
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
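A quick sanity check on the card's numbers: the step counts in the results table are consistent with the listed batch size and epoch count. A minimal sketch in plain Python (values copied from the table above; the dataset-size figure is an upper bound, since the last batch of an epoch may be partial):

```python
# Figures taken from the hyperparameters and training-results table above.
steps_per_epoch = 151      # step count logged at epoch 1.0
train_batch_size = 8
num_epochs = 10

# Each optimizer step consumes one batch, so the training set holds at most
# steps_per_epoch * train_batch_size examples.
max_train_examples = steps_per_epoch * train_batch_size
total_steps = steps_per_epoch * num_epochs

print(max_train_examples)  # 1208
print(total_steps)         # 1510, matching the final table row
```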
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_eos_token_id": 2,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 4,
+   "pad_token_id": 1,
+   "transformers_version": "4.36.2"
+ }
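These decoding defaults can be inspected without downloading the checkpoint. The sketch below parses the committed JSON with the standard library and notes, in comments, how the fields drive generation; the `pipeline` call sketched at the end assumes a repo id of the form `tanatapanun/fine-tuned-BioBART-2048-inputs-10-epochs` and requires a network download, so it is shown only as a comment:

```python
import json

# generation_config.json exactly as committed above.
generation_config = json.loads("""
{
  "bos_token_id": 0,
  "decoder_start_token_id": 2,
  "early_stopping": true,
  "eos_token_id": 2,
  "forced_eos_token_id": 2,
  "no_repeat_ngram_size": 3,
  "num_beams": 4,
  "pad_token_id": 1,
  "transformers_version": "4.36.2"
}
""")

# num_beams=4 with early_stopping=true means beam search stops as soon as
# all beams have produced an EOS; no_repeat_ngram_size=3 blocks any trigram
# from appearing twice in the generated text.
print(generation_config["num_beams"])             # 4
print(generation_config["no_repeat_ngram_size"])  # 3

# With transformers installed, these defaults are picked up automatically
# when the checkpoint is loaded, e.g. (repo id assumed, network required):
#   from transformers import pipeline
#   summarizer = pipeline(
#       "summarization",
#       model="tanatapanun/fine-tuned-BioBART-2048-inputs-10-epochs",
#   )
#   summarizer("...input text up to 2048 tokens...")
```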
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:93df04d9613c9d8d7925d09a06762579d7e02780f121dbd20fa78b198d9e3b76
+ oid sha256:1f1f94bda1c7c591e3e35baba3692736c46b8b67a135b5a911a98d0628aa5065
  size 665990956
runs/Dec27_05-04-43_william-gpu-3090-10-rxldr/events.out.tfevents.1703653484.william-gpu-3090-10-rxldr.71969.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:30d5bc72ebd5742d8575d7bc19780cd5fd4e1b3915c95ad3e8d46efa49440786
- size 10872
+ oid sha256:3bf7d6a6840042e72c24bc82e9a771c88e4d20b93ce00c8db0a6edc89200d667
+ size 11751