ospanbatyr committed
Commit 45693d6
1 Parent(s): 4b1b695

Model save

Files changed (1):
  1. README.md +1 -25
README.md CHANGED
@@ -14,8 +14,6 @@ should probably proofread and complete it, then remove this comment. -->
  # idefics-9b-instruct-ft-instruct-compact
 
  This model is a fine-tuned version of [HuggingFaceM4/idefics-9b-instruct](https://huggingface.co/HuggingFaceM4/idefics-9b-instruct) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: nan
 
  ## Model description
 
@@ -34,7 +32,7 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 5e-05
+ - learning_rate: 2e-05
  - train_batch_size: 4
  - eval_batch_size: 8
  - seed: 42
@@ -46,28 +44,6 @@ The following hyperparameters were used during training:
  - num_epochs: 3
  - mixed_precision_training: Native AMP
 
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 3.7449        | 0.18  | 25   | 3.3985          |
- | 2.0448        | 0.36  | 50   | 1.6004          |
- | 0.7728        | 0.54  | 75   | 0.5974          |
- | 0.5519        | 0.73  | 100  | 0.5353          |
- | 0.5128        | 0.91  | 125  | nan             |
- | 6.5038        | 1.09  | 150  | nan             |
- | 8.2169        | 1.27  | 175  | nan             |
- | 3.1474        | 1.45  | 200  | nan             |
- | 3.2837        | 1.63  | 225  | nan             |
- | 2.7581        | 1.81  | 250  | nan             |
- | 6.01          | 2.0   | 275  | nan             |
- | 5.9971        | 2.18  | 300  | nan             |
- | 6.5173        | 2.36  | 325  | nan             |
- | 10.7043       | 2.54  | 350  | nan             |
- | 2.5312        | 2.72  | 375  | nan             |
- | 3.013         | 2.9   | 400  | nan             |
-
-
  ### Framework versions
 
  - Transformers 4.35.2
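
The hyperparameters listed in the diff (with the learning rate this commit lowers from 5e-05 to 2e-05) map onto a `transformers.TrainingArguments` configuration roughly as follows. This is a sketch under assumptions, not the author's actual training script; `output_dir` is a placeholder, and other settings the card elides (optimizer, scheduler, etc.) are left at their defaults:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters shown in the diff.
args = TrainingArguments(
    output_dir="idefics-9b-instruct-ft-instruct-compact",  # placeholder
    learning_rate=2e-05,            # changed from 5e-05 in this commit
    per_device_train_batch_size=4,  # train_batch_size: 4
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,
    num_train_epochs=3,
    fp16=True,                      # mixed_precision_training: Native AMP
)
```

Passing these `args` to a `Trainer` would reproduce the listed settings, though the card does not say how training was actually launched.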
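
The training-results table this commit removes shows the validation loss collapsing to `nan` from step 125 onward, which likely motivated the learning-rate reduction. A minimal stdlib-only sketch (a hypothetical helper, not part of the model card) of how an evaluation loop might flag such a collapse early:

```python
import math

def first_nan_step(losses):
    """Return the first step whose loss is NaN, or None if none is.

    `losses` is an iterable of (step, loss) pairs, e.g. the
    (Step, Validation Loss) columns of the removed results table.
    """
    for step, loss in losses:
        if math.isnan(loss):
            return step
    return None

# (Step, Validation Loss) pairs from the removed table.
history = [(25, 3.3985), (50, 1.6004), (75, 0.5974),
           (100, 0.5353), (125, float("nan")), (150, float("nan"))]

print(first_nan_step(history))  # → 125
```

A guard like this lets a run stop (or reload an earlier checkpoint and lower the learning rate) as soon as the loss diverges, instead of training for two more epochs of `nan` evaluations as the removed table records.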