juletxara committed
Commit e8e4496
1 Parent(s): 102b92b

update model card README.md

Files changed (1)
  1. README.md +86 -0
README.md ADDED
@@ -0,0 +1,86 @@
---
license: other
tags:
- generated_from_trainer
model-index:
- name: alpaca-lora-13b-en-pt-es-ca-eu-gl-at
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# alpaca-lora-13b-en-pt-es-ca-eu-gl-at

This model is a fine-tuned version of [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf); the training dataset is not recorded in this auto-generated card.
It achieves the following results on the evaluation set:
- Loss: 0.9967
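
The name suggests this is a LoRA adapter for Alpaca-style instruction following in the listed languages (en, pt, es, ca, eu, gl, at) on top of LLaMA-13B. Below is a minimal loading-and-generation sketch, assuming the adapter weights in this repository are a standard PEFT LoRA adapter and that the usual Alpaca prompt template applies; both the repo id and the template are assumptions, since the card does not document usage.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Base model named in this card; the adapter repo id is inferred from the
# card title and may need adjusting.
base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-13b-hf")
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
model = PeftModel.from_pretrained(base, "juletxara/alpaca-lora-13b-en-pt-es-ca-eu-gl-at")

# Alpaca-style prompt template (assumed; usage is not documented in this card).
prompt = "### Instruction:\nTranslate to Galician: Good morning.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```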

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
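
As a reading aid, here is a rough sketch of how these settings map onto `transformers.TrainingArguments`. The actual training script is not part of this card, so `output_dir` and the single-device assumption are illustrative; the stated Adam betas and epsilon are the optimizer defaults.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; not the original script.
# 16 per device x 8 accumulation steps = 128 effective batch size (assuming
# a single device, which the card does not confirm).
args = TrainingArguments(
    output_dir="alpaca-lora-13b-en-pt-es-ca-eu-gl-at",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```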

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.303         | 0.04  | 100  | 1.2875          |
| 1.2153        | 0.07  | 200  | 1.2016          |
| 1.1584        | 0.11  | 300  | 1.1560          |
| 1.1426        | 0.15  | 400  | 1.1277          |
| 1.1198        | 0.18  | 500  | 1.1063          |
| 1.0631        | 0.22  | 600  | 1.0911          |
| 1.0714        | 0.26  | 700  | 1.0773          |
| 1.0505        | 0.29  | 800  | 1.0667          |
| 1.0475        | 0.33  | 900  | 1.0562          |
| 1.0411        | 0.37  | 1000 | 1.0485          |
| 1.0418        | 0.4   | 1100 | 1.0413          |
| 1.0419        | 0.44  | 1200 | 1.0339          |
| 1.0315        | 0.48  | 1300 | 1.0290          |
| 1.0235        | 0.51  | 1400 | 1.0238          |
| 1.0308        | 0.55  | 1500 | 1.0189          |
| 1.0039        | 0.59  | 1600 | 1.0157          |
| 1.0048        | 0.62  | 1700 | 1.0110          |
| 0.9982        | 0.66  | 1800 | 1.0080          |
| 1.0196        | 0.7   | 1900 | 1.0049          |
| 1.019         | 0.73  | 2000 | 1.0030          |
| 1.0037        | 0.77  | 2100 | 1.0009          |
| 1.0003        | 0.81  | 2200 | 0.9995          |
| 0.9942        | 0.84  | 2300 | 0.9982          |
| 0.9986        | 0.88  | 2400 | 0.9974          |
| 0.9987        | 0.92  | 2500 | 0.9969          |
| 0.9763        | 0.95  | 2600 | 0.9967          |
| 0.9733        | 0.99  | 2700 | 0.9967          |

### Framework versions

- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2