---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
- sacrebleu
- rouge
model-index:
- name: CodeLlama-7b-Instruct-hf_Fi__components_size_252_epochs_10_2024-06-21_09-35-27_3556547
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# CodeLlama-7b-Instruct-hf_Fi__components_size_252_epochs_10_2024-06-21_09-35-27_3556547

This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9096
- Accuracy: 0.462
- Chrf: 0.297
- Bleu: 0.225
- Sacrebleu: 0.2
- Rouge1: 0.472
- Rouge2: 0.3
- Rougel: 0.459
- Rougelsum: 0.471
- Meteor: 0.505
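
If the reported loss is the mean token-level cross-entropy (an assumption; the card does not say), it corresponds to an evaluation perplexity of roughly exp(1.9096) ≈ 6.75.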

## Model description

More information needed

## Intended uses & limitations

More information needed
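
Until these sections are filled in, here is a minimal usage sketch. The `repo_id` is an assumption inferred from the uploader and the model name above; adjust it to wherever the weights are actually hosted.

```python
# Minimal usage sketch, not the card's official example. The repo id below is
# an assumption inferred from the model name; point it at the actual weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "vdavidr/CodeLlama-7b-Instruct-hf_Fi__components_size_252_epochs_10_2024-06-21_09-35-27_3556547"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one GPU
    device_map="auto",
)

# CodeLlama-Instruct checkpoints expect the [INST] ... [/INST] prompt format.
prompt = "[INST] Write a Python function that reverses a string. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```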

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 252
- training_steps: 2520
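
These values map onto `transformers.TrainingArguments` roughly as sketched below. This is a hedged reconstruction from the list above, not the script that produced this model; the output directory name is a placeholder, and the multi-GPU setup (distributed_type, num_devices) is configured outside `TrainingArguments`, e.g. via `accelerate`.

```python
# Hypothetical reconstruction of the training configuration from the values
# listed above; the actual training script was not published with this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CodeLlama-7b-Instruct-hf_Fi_finetune",  # placeholder name
    learning_rate=1e-3,
    per_device_train_batch_size=1,  # 4 GPUs -> total train batch size 4
    per_device_eval_batch_size=1,   # 4 GPUs -> total eval batch size 4
    seed=3407,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_steps=252,
    max_steps=2520,                 # "training_steps" in the list above
)
```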

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----:|:---------:|:------:|:------:|:------:|:---------:|:------:|
| 0.063 | 4.0 | 252 | 3.6864 | 0.457 | 0.044 | 0.0 | 0.0 | 0.044 | 0.0 | 0.03 | 0.03 | 0.138 |
| 0.0742 | 8.0 | 504 | 2.7260 | 0.474 | 0.104 | 0.036 | 0.0 | 0.148 | 0.009 | 0.126 | 0.143 | 0.24 |
| 0.0774 | 12.0 | 756 | 2.6054 | 0.461 | 0.159 | 0.099 | 0.1 | 0.315 | 0.149 | 0.306 | 0.308 | 0.325 |
| 0.7995 | 16.0 | 1008 | 2.4395 | 0.465 | 0.215 | 0.119 | 0.1 | 0.393 | 0.178 | 0.365 | 0.379 | 0.359 |
| 0.1761 | 20.0 | 1260 | 2.4190 | 0.482 | 0.249 | 0.164 | 0.2 | 0.356 | 0.194 | 0.34 | 0.355 | 0.39 |
| 0.4002 | 24.0 | 1512 | 2.1404 | 0.462 | 0.251 | 0.188 | 0.2 | 0.418 | 0.269 | 0.4 | 0.409 | 0.437 |
| 0.0254 | 28.0 | 1764 | 2.0202 | 0.46 | 0.295 | 0.192 | 0.2 | 0.484 | 0.308 | 0.461 | 0.478 | 0.463 |
| 0.1469 | 32.0 | 2016 | 1.9957 | 0.462 | 0.289 | 0.225 | 0.2 | 0.448 | 0.291 | 0.44 | 0.443 | 0.482 |
| 0.0346 | 36.0 | 2268 | 1.9562 | 0.46 | 0.293 | 0.2 | 0.2 | 0.474 | 0.278 | 0.452 | 0.471 | 0.491 |
| 0.0378 | 40.0 | 2520 | 1.9096 | 0.462 | 0.297 | 0.225 | 0.2 | 0.472 | 0.3 | 0.459 | 0.471 | 0.505 |
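
The Chrf, Bleu, Sacrebleu, Rouge and Meteor columns are standard text-generation metrics. The card does not include the evaluation code, but as an illustration, scores of this kind can be computed from predictions and references with the `evaluate` library:

```python
# Illustrative metric computation with the `evaluate` library; an assumption
# about how such scores are produced, not this card's actual evaluation code.
import evaluate

predictions = ["def reverse(s):\n    return s[::-1]"]
references = ["def reverse(s):\n    return s[::-1]"]

chrf = evaluate.load("chrf")
sacrebleu = evaluate.load("sacrebleu")
bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

# chrf/sacrebleu expect one list of reference strings per prediction.
print(chrf.compute(predictions=predictions, references=[[r] for r in references]))
print(sacrebleu.compute(predictions=predictions, references=[[r] for r in references]))
print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
print(rouge.compute(predictions=predictions, references=references))  # rouge1/2/L/Lsum
print(meteor.compute(predictions=predictions, references=references))
```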


### Framework versions

- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
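
As a convenience (not part of the original card), a quick sketch for checking that a local environment matches the versions listed above:

```python
# Compare installed package versions against those listed in this card.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.37.0",
    "torch": "2.2.1+cu121",
    "datasets": "2.20.0",
    "tokenizers": "0.15.2",
}
actual = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "OK" if actual[name] == want else f"mismatch, got {actual[name]}"
    print(f"{name}: expected {want} -> {status}")
```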