---
license: apache-2.0
base_model: bedus-creation/mBart-small-dataset-ii-eng-lim-003
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/mBart-small-dataset-ii-eng-lim-003
  results: []
---

# bedus-creation/mBart-small-dataset-ii-eng-lim-003

This model is a fine-tuned version of [bedus-creation/mBart-small-dataset-ii-eng-lim-003](https://huggingface.co/bedus-creation/mBart-small-dataset-ii-eng-lim-003) on an unknown dataset.
It achieves the following results after the final training epoch (63):
- Train Loss: 0.1460
- Validation Loss: 0.2778

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (learning_rate=1e-04, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False, weight_decay_rate=0.01, decay=0.0)
- training_precision: float32
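For reference, AdamWeightDecay is Adam with *decoupled* weight decay (AdamW): the decay term is applied directly to the parameter rather than added to the gradient. A minimal plain-Python sketch of one update step using the hyperparameters above (the function and variable names here are illustrative, not the Keras/Transformers API):

```python
# Hyperparameters as listed in this card.
lr, beta1, beta2, eps, wd = 1e-4, 0.9, 0.999, 1e-7, 0.01

def adamw_step(param, grad, m, v, t):
    """Apply one decoupled-weight-decay Adam update to a scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    # Weight decay acts on the parameter itself, decoupled from the
    # gradient-based update -- this is what distinguishes AdamW from Adam + L2.
    param = param - lr * (m_hat / (v_hat ** 0.5 + eps) + wd * param)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adamw_step(p, grad=0.5, m=m, v=v, t=1)
```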

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2093     | 0.2072          | 0     |
| 0.2068     | 0.2056          | 1     |
| 0.2062     | 0.2023          | 2     |
| 0.2045     | 0.2054          | 3     |
| 0.2027     | 0.2188          | 4     |
| 0.2019     | 0.2067          | 5     |
| 0.1997     | 0.2056          | 6     |
| 0.1991     | 0.2074          | 7     |
| 0.1978     | 0.2024          | 8     |
| 0.1962     | 0.2067          | 9     |
| 0.1955     | 0.2074          | 10    |
| 0.1945     | 0.2089          | 11    |
| 0.1928     | 0.2168          | 12    |
| 0.1907     | 0.2201          | 13    |
| 0.1900     | 0.2102          | 14    |
| 0.1888     | 0.2130          | 15    |
| 0.1882     | 0.2211          | 16    |
| 0.1870     | 0.2117          | 17    |
| 0.1857     | 0.2134          | 18    |
| 0.1838     | 0.2147          | 19    |
| 0.1824     | 0.2187          | 20    |
| 0.1812     | 0.2224          | 21    |
| 0.1813     | 0.2249          | 22    |
| 0.1798     | 0.2200          | 23    |
| 0.1787     | 0.2273          | 24    |
| 0.1772     | 0.2263          | 25    |
| 0.1780     | 0.2273          | 26    |
| 0.1764     | 0.2270          | 27    |
| 0.1754     | 0.2245          | 28    |
| 0.1738     | 0.2260          | 29    |
| 0.1730     | 0.2327          | 30    |
| 0.1720     | 0.2300          | 31    |
| 0.1702     | 0.2347          | 32    |
| 0.1698     | 0.2396          | 33    |
| 0.1689     | 0.2340          | 34    |
| 0.1693     | 0.2345          | 35    |
| 0.1661     | 0.2424          | 36    |
| 0.1663     | 0.2388          | 37    |
| 0.1658     | 0.2436          | 38    |
| 0.1654     | 0.2506          | 39    |
| 0.1639     | 0.2406          | 40    |
| 0.1635     | 0.2524          | 41    |
| 0.1619     | 0.2379          | 42    |
| 0.1609     | 0.2449          | 43    |
| 0.1602     | 0.2466          | 44    |
| 0.1602     | 0.2537          | 45    |
| 0.1586     | 0.2457          | 46    |
| 0.1576     | 0.2589          | 47    |
| 0.1573     | 0.2547          | 48    |
| 0.1566     | 0.2532          | 49    |
| 0.1546     | 0.2565          | 50    |
| 0.1540     | 0.2544          | 51    |
| 0.1545     | 0.2637          | 52    |
| 0.1515     | 0.2580          | 53    |
| 0.1520     | 0.2654          | 54    |
| 0.1524     | 0.2650          | 55    |
| 0.1513     | 0.2701          | 56    |
| 0.1500     | 0.2767          | 57    |
| 0.1492     | 0.2646          | 58    |
| 0.1483     | 0.2696          | 59    |
| 0.1480     | 0.2729          | 60    |
| 0.1475     | 0.2709          | 61    |
| 0.1458     | 0.2757          | 62    |
| 0.1460     | 0.2778          | 63    |
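Note that validation loss reaches its minimum (0.2023) at epoch 2 and then rises steadily while train loss keeps falling, a typical overfitting pattern. If checkpoints per epoch are available, selecting the one with the lowest validation loss is straightforward; a minimal sketch (the `history` list transcribes a subset of the rows in the table above):

```python
# (train_loss, val_loss, epoch) rows copied from the table; only the
# first few and last rows are reproduced here for brevity.
history = [
    (0.2093, 0.2072, 0),
    (0.2068, 0.2056, 1),
    (0.2062, 0.2023, 2),
    (0.2045, 0.2054, 3),
    (0.1458, 0.2757, 62),
    (0.1460, 0.2778, 63),
]

# min() over validation loss identifies the epoch whose weights to keep.
best_train, best_val, best_epoch = min(history, key=lambda row: row[1])
print(best_epoch, best_val)  # epoch 2, validation loss 0.2023
```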


### Framework versions

- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3