sehilnlf committed
Commit 7bb5d7e
1 Parent(s): ff74325
README.md ADDED
@@ -0,0 +1,103 @@
+ ---
+ license: apache-2.0
+ base_model: facebook/bart-large
+ tags:
+ - text2text-generation
+ - generated_from_trainer
+ metrics:
+ - sacrebleu
+ model-index:
+ - name: model_v3_v2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # model_v3_v2
+
+ This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1977
+ - Sacrebleu: 66.7256
+
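+ A minimal usage sketch (the repo id `sehilnlf/model_v3_v2` is an assumption based on this repository's name; substitute the actual path):
+
+ ```python
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+ # Hypothetical repo id; replace with wherever this checkpoint actually lives.
+ model_id = "sehilnlf/model_v3_v2"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+
+ inputs = tokenizer("An example source sentence.", return_tensors="pt")
+ # generate() picks up the beam-search defaults from generation_config.json below.
+ output_ids = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```
+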
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
+ - learning_rate: 5e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 256
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 40
+ - mixed_precision_training: Native AMP
+
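+ A hedged reproduction sketch: the `output_dir`, eval strategy, and metric wiring are assumptions, while the numeric values mirror the list above.
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="model_v3_v2",         # placeholder path
+     learning_rate=5e-5,
+     per_device_train_batch_size=32,   # train_batch_size: 32
+     per_device_eval_batch_size=32,    # eval_batch_size: 32
+     seed=42,
+     gradient_accumulation_steps=8,    # 32 * 8 = 256 effective train batch size
+     lr_scheduler_type="linear",
+     num_train_epochs=40,
+     fp16=True,                        # Native AMP mixed precision
+     evaluation_strategy="epoch",      # assumed; matches the per-epoch table below
+     predict_with_generate=True,       # assumed; required to score SacreBLEU
+ )
+ # The default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-8.
+ ```
+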
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|
+ | No log | 0.99 | 54 | 0.5648 | 65.7974 |
+ | No log | 1.99 | 109 | 0.6224 | 66.8854 |
+ | No log | 3.0 | 164 | 0.6639 | 66.8333 |
+ | No log | 4.0 | 219 | 0.5929 | 66.7857 |
+ | No log | 4.99 | 273 | 0.6427 | 65.8395 |
+ | No log | 5.99 | 328 | 0.6721 | 66.4172 |
+ | No log | 7.0 | 383 | 0.7511 | 66.4660 |
+ | No log | 8.0 | 438 | 0.7662 | 66.6480 |
+ | No log | 8.99 | 492 | 0.7588 | 66.5092 |
+ | No log | 9.99 | 547 | 0.7916 | 66.5144 |
+ | No log | 11.0 | 602 | 0.8172 | 66.6279 |
+ | No log | 12.0 | 657 | 0.8350 | 66.5607 |
+ | No log | 12.99 | 711 | 0.8809 | 66.6095 |
+ | No log | 13.99 | 766 | 0.8843 | 66.4089 |
+ | No log | 15.0 | 821 | 1.0130 | 66.5184 |
+ | No log | 16.0 | 876 | 0.9180 | 66.4269 |
+ | No log | 16.99 | 930 | 0.9794 | 66.5766 |
+ | No log | 17.99 | 985 | 0.9450 | 66.6713 |
+ | No log | 19.0 | 1040 | 0.9880 | 66.7081 |
+ | No log | 20.0 | 1095 | 0.9540 | 66.4440 |
+ | No log | 20.99 | 1149 | 1.0552 | 66.5390 |
+ | No log | 21.99 | 1204 | 0.9806 | 66.5975 |
+ | No log | 23.0 | 1259 | 1.0528 | 66.6404 |
+ | No log | 24.0 | 1314 | 1.0348 | 66.4127 |
+ | No log | 24.99 | 1368 | 1.0758 | 66.6139 |
+ | No log | 25.99 | 1423 | 1.1291 | 66.6778 |
+ | No log | 27.0 | 1478 | 1.1112 | 66.6411 |
+ | No log | 28.0 | 1533 | 1.1305 | 66.5986 |
+ | No log | 28.99 | 1587 | 1.1532 | 66.5047 |
+ | No log | 29.99 | 1642 | 1.1106 | 66.5662 |
+ | No log | 31.0 | 1697 | 1.2084 | 66.6593 |
+ | No log | 32.0 | 1752 | 1.1438 | 66.6117 |
+ | No log | 32.99 | 1806 | 1.1956 | 66.6758 |
+ | No log | 33.99 | 1861 | 1.1630 | 66.7359 |
+ | No log | 35.0 | 1916 | 1.1570 | 66.6989 |
+ | No log | 36.0 | 1971 | 1.1754 | 66.6495 |
+ | No log | 36.99 | 2025 | 1.2456 | 66.7018 |
+ | No log | 37.99 | 2080 | 1.2197 | 66.7990 |
+ | No log | 39.0 | 2135 | 1.1886 | 66.7049 |
+ | No log | 39.45 | 2160 | 1.1977 | 66.7256 |
+
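+ For reference, the Sacrebleu column can be recomputed offline with the `evaluate` library (a sketch; `predictions` and `references` are placeholders for decoded eval outputs and gold targets):
+
+ ```python
+ import evaluate
+
+ # Placeholders: decoded model outputs and their gold references.
+ predictions = ["the cat sat on the mat"]
+ references = [["the cat sat on the mat"]]
+
+ sacrebleu = evaluate.load("sacrebleu")
+ result = sacrebleu.compute(predictions=predictions, references=references)
+ print(result["score"])  # corpus-level BLEU; 66.7256 at the final step above
+ ```
+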
+ ### Framework versions
+
+ - Transformers 4.39.3
+ - PyTorch 2.1.2
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_bos_token_id": 0,
+   "forced_eos_token_id": 2,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 4,
+   "pad_token_id": 1,
+   "transformers_version": "4.39.3"
+ }
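This config makes `generate()` default to 4-beam search with early stopping, a 3-gram repetition block, and BART's forced BOS/EOS tokens. A sketch of loading it explicitly (reusing `model` and `inputs` from the usage sketch above; the repo id remains a hypothetical placeholder):

```python
from transformers import GenerationConfig

# Hypothetical repo id; this file ships alongside the model weights.
gen_config = GenerationConfig.from_pretrained("sehilnlf/model_v3_v2")
assert gen_config.num_beams == 4 and gen_config.no_repeat_ngram_size == 3

# Equivalent to the defaults model.generate() picks up automatically.
output_ids = model.generate(**inputs, generation_config=gen_config)
```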
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1a3c8edb79350ac803e2c3b7551e5eb00b943ced7e4eee550423372e35933c91
+ oid sha256:35344336745e2056fea2dbd01b859ba07164c11ed877c89e49579b87a679760f
  size 1625426996
runs/May26_06-54-36_257b58ae2cb4/events.out.tfevents.1716706627.257b58ae2cb4.24.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:516553ca21b673b222da064efc6921c00b3e84a029356a5ce5103bee6d66a67c
- size 18982
+ oid sha256:493c8985e568a1343799976cc667fc39b4cb957c348fd8c560bef33d2c51ec3d
+ size 19336