---
base_model: facebook/bart-large-cnn
library_name: peft
license: mit
tags:
- generated_from_trainer
model-index:
- name: lora_fine_tuned_bart
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nedith22-makerere-university/Fatima%20Fellowahip2024/runs/pvowwaid)
# lora_fine_tuned_bart

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6906
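
The adapter can be loaded on top of the base model with `peft`. The sketch below is a minimal inference example; `nedith22/lora_fine_tuned_bart` is a placeholder id, not a confirmed repository name, so point it at wherever these LoRA weights actually live.

```python
# Minimal inference sketch (not from the original repository).
# "nedith22/lora_fine_tuned_bart" is a placeholder: replace it with the actual
# adapter directory or Hub repository holding these LoRA weights.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSeq2SeqLM

adapter_id = "nedith22/lora_fine_tuned_bart"  # placeholder id

# The tokenizer comes from the base model; the PEFT auto class resolves
# facebook/bart-large-cnn from the adapter config and attaches the LoRA weights.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoPeftModelForSeq2SeqLM.from_pretrained(adapter_id)
model.eval()

article = "Text to summarize goes here."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)

with torch.no_grad():
    summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=142)

print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```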

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
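
These values map onto a standard PEFT + `Seq2SeqTrainer` setup. The sketch below is a hedged reconstruction for orientation only: the `LoraConfig` values and the dataset handling are assumptions, not settings recorded in this card.

```python
# Hedged reconstruction of the training setup from the hyperparameters above.
# The LoRA settings (r, lora_alpha, lora_dropout, target_modules) and the
# dataset are NOT recorded in this card and are assumptions for illustration.
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments
from peft import LoraConfig, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

lora_config = LoraConfig(
    task_type="SEQ_2_SEQ_LM",
    r=16,                                 # assumption
    lora_alpha=32,                        # assumption
    lora_dropout=0.1,                     # assumption
    target_modules=["q_proj", "v_proj"],  # assumption: BART attention projections
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# Values below mirror the list above; Trainer defaults cover the stated Adam
# betas=(0.9, 0.999) and epsilon=1e-08.
training_args = Seq2SeqTrainingArguments(
    output_dir="lora_fine_tuned_bart",
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",  # consistent with the per-epoch validation losses below
    report_to="wandb",
)

# A Seq2SeqTrainer would then be built from `model`, `training_args`, a data
# collator, and the tokenized train/eval datasets (not recorded in this card).
```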

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7253        | 1.0   | 32   | 1.8719          |
| 1.426         | 2.0   | 64   | 1.6167          |
| 1.3662        | 3.0   | 96   | 1.5225          |
| 1.3049        | 4.0   | 128  | 1.4695          |
| 1.2084        | 5.0   | 160  | 1.4319          |
| 1.2666        | 6.0   | 192  | 1.3709          |
| 1.2344        | 7.0   | 224  | 1.3053          |
| 1.1056        | 8.0   | 256  | 1.2560          |
| 1.025         | 9.0   | 288  | 1.1773          |
| 0.915         | 10.0  | 320  | 1.0743          |
| 0.8726        | 11.0  | 352  | 1.0085          |
| 0.8281        | 12.0  | 384  | 0.9630          |
| 0.777         | 13.0  | 416  | 0.9116          |
| 0.7681        | 14.0  | 448  | 0.8817          |
| 0.664         | 15.0  | 480  | 0.8357          |
| 0.6604        | 16.0  | 512  | 0.8077          |
| 0.6351        | 17.0  | 544  | 0.7837          |
| 0.6455        | 18.0  | 576  | 0.7724          |
| 0.6167        | 19.0  | 608  | 0.7585          |
| 0.5969        | 20.0  | 640  | 0.7443          |
| 0.5605        | 21.0  | 672  | 0.7382          |
| 0.5835        | 22.0  | 704  | 0.7302          |
| 0.5668        | 23.0  | 736  | 0.7183          |
| 0.575         | 24.0  | 768  | 0.7124          |
| 0.5319        | 25.0  | 800  | 0.7129          |
| 0.5515        | 26.0  | 832  | 0.7085          |
| 0.5219        | 27.0  | 864  | 0.7119          |
| 0.5509        | 28.0  | 896  | 0.7074          |
| 0.5172        | 29.0  | 928  | 0.7014          |
| 0.5298        | 30.0  | 960  | 0.7034          |
| 0.5071        | 31.0  | 992  | 0.6930          |
| 0.525         | 32.0  | 1024 | 0.6941          |
| 0.5153        | 33.0  | 1056 | 0.6963          |
| 0.5115        | 34.0  | 1088 | 0.6925          |
| 0.5194        | 35.0  | 1120 | 0.6933          |
| 0.5138        | 36.0  | 1152 | 0.6926          |
| 0.4649        | 37.0  | 1184 | 0.6913          |
| 0.5127        | 38.0  | 1216 | 0.6932          |
| 0.5044        | 39.0  | 1248 | 0.6929          |
| 0.4701        | 40.0  | 1280 | 0.6921          |
| 0.5156        | 41.0  | 1312 | 0.6931          |
| 0.5163        | 42.0  | 1344 | 0.6898          |
| 0.5153        | 43.0  | 1376 | 0.6896          |
| 0.5054        | 44.0  | 1408 | 0.6880          |
| 0.4915        | 45.0  | 1440 | 0.6872          |
| 0.4908        | 46.0  | 1472 | 0.6879          |
| 0.4836        | 47.0  | 1504 | 0.6891          |
| 0.491         | 48.0  | 1536 | 0.6889          |
| 0.4814        | 49.0  | 1568 | 0.6905          |
| 0.4872        | 50.0  | 1600 | 0.6906          |


### Framework versions

- PEFT 0.12.0
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
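
To reproduce this environment, the versions above can be pinned, e.g. in a `requirements.txt` (standard PyPI package names; this file is not part of the original repository):

```text
peft==0.12.0
transformers==4.42.3
torch==2.1.2
datasets==2.20.0
tokenizers==0.19.1
```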