gimarchetti committed
Commit 26c0502
1 Parent(s): 2b8d399

End of training

README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6739
+- Loss: 0.6872
 
 ## Model description
 
@@ -35,36 +35,31 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 3
+- eval_batch_size: 3
 - seed: 42
 - gradient_accumulation_steps: 10
-- total_train_batch_size: 20
+- total_train_batch_size: 30
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 50
 - num_epochs: 2
-- mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 1.5316 | 0.2 | 38 | 0.7854 |
-| 0.7931 | 0.4 | 76 | 0.7384 |
-| 0.8019 | 0.6 | 114 | 0.7167 |
-| 0.7487 | 0.8 | 152 | 0.6992 |
-| 0.7416 | 1.0 | 190 | 0.6887 |
-| 0.5919 | 1.2 | 228 | 0.6977 |
-| 0.5819 | 1.4 | 266 | 0.6903 |
-| 0.5948 | 1.6 | 304 | 0.6849 |
-| 0.5858 | 1.8 | 342 | 0.6780 |
-| 0.5539 | 2.0 | 380 | 0.6739 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 1.4399 | 0.2999 | 38 | 0.7745 |
+| 0.8072 | 0.5998 | 76 | 0.7329 |
+| 0.7577 | 0.8998 | 114 | 0.7067 |
+| 0.6603 | 1.1997 | 152 | 0.7081 |
+| 0.6099 | 1.4996 | 190 | 0.6960 |
+| 0.6045 | 1.7995 | 228 | 0.6872 |
 
 
 ### Framework versions
 
 - Transformers 4.41.0.dev0
-- Pytorch 1.13.1+cu117
+- Pytorch 2.0.1
 - Datasets 2.19.1
 - Tokenizers 0.19.1
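The hyperparameter edits above are internally consistent: the reported total_train_batch_size is train_batch_size × gradient_accumulation_steps, i.e. 3 × 10 = 30, assuming a single device. A minimal sketch of a matching `transformers.TrainingArguments` follows; the training script itself is not part of this commit, so `output_dir` and anything not listed in the README are assumptions:

```python
from transformers import TrainingArguments

# Sketch matching the hyperparameters in the updated model card.
# Effective train batch = per_device_train_batch_size * gradient_accumulation_steps
#                       = 3 * 10 = 30 (single device assumed)
args = TrainingArguments(
    output_dir="idefics2-8b-finetuned",  # hypothetical; not named in this commit
    learning_rate=1e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    gradient_accumulation_steps=10,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=2,
    seed=42,
)
```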
adapter_config.json CHANGED
@@ -21,6 +21,6 @@
   "revision": null,
   "target_modules": ".*(text_model|modality_projection|perceiver_resampler).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$",
   "task_type": null,
-  "use_dora": false,
+  "use_dora": true,
   "use_rslora": false
 }
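Functionally, this one-line change retrains the adapter with DoRA (weight-decomposed low-rank adaptation) instead of plain LoRA, over the same target_modules regex. A hedged peft sketch of the equivalent configuration; the rank and alpha fields fall outside this hunk, so those values are placeholders:

```python
from peft import LoraConfig

# Sketch of the adapter config after this commit; only use_dora changed here.
lora_config = LoraConfig(
    target_modules=r".*(text_model|modality_projection|perceiver_resampler).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$",
    use_dora=True,    # DoRA: decomposes updates into magnitude and direction
    use_rslora=False,
    r=8,              # placeholder: rank is not visible in this diff hunk
    lora_alpha=8,     # placeholder: alpha is not visible in this diff hunk
)
```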
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5c4430b955fd1bcca51748433e8bd18ac08a1a0b1a446165741e1c9b3e91b671
-size 93378688
+oid sha256:7da3396b30bb4afe2e4eb2206a34b4bebdde289c521757d8f98d916f74b527eb
+size 49840864
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1920691e93e1b989f360a2f842bdbbf92dc0386365f49826cf716a94f45d7f64
+oid sha256:b65e69b9f16929f9153aeb81074223e10d976406a3610f8b175bcc5c85163483
 size 4731