---
license: bsd-3-clause
base_model: Salesforce/codet5-large
tags:
- generated_from_trainer
datasets:
- arrow
library_name: peft
model-index:
- name: codet5-large-2024-11-27_23-08
  results: []
---


# codet5-large-2024-11-27_23-08

This model is a PEFT adapter fine-tuned from [Salesforce/codet5-large](https://huggingface.co/Salesforce/codet5-large) on a dataset loaded from local Arrow files (the `arrow` tag above).
It achieves the following results on the evaluation set:
- Loss: 0.2038
- Gen Len: 18.9997
- Bertscorer-p: 0.6236
- Bertscorer-r: 0.2353
- Bertscorer-f1: 0.4224
- Sacrebleu-score: 14.0575
- Sacrebleu-precisions: [93.21674851306209, 85.96364041936204, 80.9029722765622, 77.23407849541078]
- Bleu-bp: 0.1671
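
These numbers are internally consistent: the SacreBLEU score is the brevity penalty times the geometric mean of the four n-gram precisions, which can be checked in a few lines of Python:

```python
import math

# BLEU = brevity_penalty x geometric mean of the 1- to 4-gram precisions
precisions = [93.21674851306209, 85.96364041936204, 80.9029722765622, 77.23407849541078]
bp = 0.1671  # Bleu-bp as reported above (rounded)

geo_mean = math.exp(sum(math.log(p) for p in precisions) / len(precisions))
print(round(bp * geo_mean, 2))  # ~14.06, matching the reported Sacrebleu-score of 14.0575
```

The brevity penalty of ~0.17 implies the references are roughly 2.8x longer than the generated sequences (which are capped near 19 tokens, per Gen Len), so the raw BLEU score understates the n-gram precision of the outputs.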

## Model description

This is a PEFT adapter (see the framework versions below) trained on top of [Salesforce/codet5-large](https://huggingface.co/Salesforce/codet5-large); the base model weights are left unchanged, and the adapter is loaded on top of them at inference time. The task and training objective are otherwise undocumented.

## Intended uses & limitations

More information needed
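
Pending fuller documentation, the adapter can be exercised with `peft` and `transformers`. A minimal sketch, assuming the adapter is available locally or on the Hub under the placeholder id `codet5-large-2024-11-27_23-08` (substitute the actual path or repo id):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "Salesforce/codet5-large"
adapter_id = "codet5-large-2024-11-27_23-08"  # placeholder: actual adapter path/repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the trained adapter weights
model.eval()

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
with torch.no_grad():
    # the eval Gen Len of ~19 suggests generation was capped around 20 tokens
    output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```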

## Training and evaluation data

More information needed. The `arrow` dataset tag only records that the data was loaded from Arrow files on disk; the underlying source and splits are not documented here.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
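
For reference, these settings map onto `Seq2SeqTrainingArguments` roughly as follows. This is a sketch, not the original training script; `output_dir` and `predict_with_generate` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; Adam betas (0.9, 0.999) and
# epsilon 1e-08 are the Transformers defaults, so no explicit override is needed.
training_args = Seq2SeqTrainingArguments(
    output_dir="codet5-large-2024-11-27_23-08",  # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                   # "Native AMP" mixed precision
    predict_with_generate=True,  # assumed: needed for Gen Len / BLEU during eval
)
```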

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions                                                         | Bleu-bp |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| 0.307         | 1.0   | 2386  | 0.2578          | 18.9996 | 0.6125       | 0.2274       | 0.4130        | 13.3756         | [91.98367952522256, 82.75386027211427, 76.38867305533972, 71.8096760543514]  | 0.1664  |
| 0.2249        | 2.0   | 4772  | 0.2234          | 18.9998 | 0.6129       | 0.2272       | 0.4130        | 13.6165         | [92.0984638163983, 83.32430926029143, 77.59097368761036, 73.62220971675107]  | 0.1673  |
| 0.1705        | 3.0   | 7158  | 0.2052          | 18.9997 | 0.6144       | 0.2270       | 0.4137        | 13.6722         | [92.39128019377347, 83.86287225736548, 78.10795204812425, 74.06802567856741] | 0.1671  |
| 0.1359        | 4.0   | 9544  | 0.1975          | 18.9999 | 0.6180       | 0.2312       | 0.4176        | 13.8305         | [92.69034856516717, 84.60221526799009, 79.12022601595204, 75.24504516334781] | 0.1673  |
| 0.1124        | 5.0   | 11930 | 0.1965          | 18.9997 | 0.6219       | 0.2347       | 0.4212        | 14.0296         | [93.00053938628186, 85.37114434185644, 79.99295344980192, 76.11429212978557] | 0.1683  |
| 0.0901        | 6.0   | 14316 | 0.1953          | 18.9997 | 0.6228       | 0.2341       | 0.4214        | 13.9769         | [93.14913197145842, 85.62195160827568, 80.38468501866524, 76.62666892006084] | 0.1669  |
| 0.0717        | 7.0   | 16702 | 0.1976          | 18.9998 | 0.6252       | 0.2356       | 0.4233        | 14.0892         | [93.2416842914824, 85.8948155335173, 80.64185934489403, 76.84015322512667]   | 0.1679  |
| 0.0608        | 8.0   | 19088 | 0.2002          | 18.9997 | 0.6235       | 0.2355       | 0.4224        | 14.0253         | [93.20067563563089, 85.85486736946112, 80.71698243315461, 76.97434501403373] | 0.1670  |
| 0.0492        | 9.0   | 21474 | 0.2014          | 18.9998 | 0.6256       | 0.2367       | 0.4240        | 14.0964         | [93.32790404975198, 86.14961977943226, 81.03875968992249, 77.30994700558082] | 0.1673  |
| 0.0428        | 10.0  | 23860 | 0.2038          | 18.9997 | 0.6236       | 0.2353       | 0.4224        | 14.0575         | [93.21674851306209, 85.96364041936204, 80.9029722765622, 77.23407849541078]  | 0.1671  |
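
The metric columns correspond to the `evaluate` library's `sacrebleu` and `bertscore` metrics. A minimal sketch on a toy prediction/reference pair (whether the original evaluation used `evaluate` or another wrapper is an assumption):

```python
import evaluate

# Toy pair for illustration only; the real evaluation set is not published here.
predictions = ["return a + b"]
references = [["return a + b"]]  # sacrebleu expects one list of references per prediction

sacrebleu = evaluate.load("sacrebleu")
bleu = sacrebleu.compute(predictions=predictions, references=references)
print(bleu["score"], bleu["precisions"], bleu["bp"])  # Sacrebleu-score / -precisions / Bleu-bp

bertscore = evaluate.load("bertscore")
bs = bertscore.compute(predictions=predictions,
                       references=[r[0] for r in references], lang="en")
print(bs["precision"][0], bs["recall"][0], bs["f1"][0])  # Bertscorer-p / -r / -f1
```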


### Framework versions

- PEFT 0.13.2
- Transformers 4.40.1
- Pytorch 1.13.1+cu117
- Datasets 3.1.0
- Tokenizers 0.19.1