Sibinraj committed
Commit d9713d9
1 Parent(s): c417efe

Sibinraj/mistral-instruct-ft2

README.md CHANGED
@@ -18,7 +18,12 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3261
+ - eval_loss: 0.3216
+ - eval_runtime: 426.3494
+ - eval_samples_per_second: 1.541
+ - eval_steps_per_second: 0.387
+ - epoch: 1.0
+ - step: 657
  
  ## Model description
  
@@ -39,24 +44,17 @@ More information needed
  The following hyperparameters were used during training:
  - learning_rate: 0.0002
  - train_batch_size: 4
- - eval_batch_size: 1
+ - eval_batch_size: 4
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: constant
  - lr_scheduler_warmup_steps: 2
- - num_epochs: 1
+ - num_epochs: 2
  - mixed_precision_training: Native AMP
  
- ### Training results
- 
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.3709 | 1.0 | 657 | 0.3261 |
- 
- 
  ### Framework versions
  
- - PEFT 0.11.1
+ - PEFT 0.12.0
  - Transformers 4.42.3
  - Pytorch 2.1.2
  - Datasets 2.20.0
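For reference, the hyperparameters in the updated card translate roughly into the following `transformers.TrainingArguments`. This is a minimal sketch, not the exact configuration stored in `training_args.bin`: the output directory and any logging or saving options are assumptions, and the Adam settings listed in the card correspond to the Trainer's defaults.

```python
# Hedged sketch of TrainingArguments mirroring the updated model card.
# output_dir is a hypothetical placeholder; logging/saving options are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-instruct-ft2",  # hypothetical path, not from the repo
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,       # changed from 1 to 4 in this commit
    num_train_epochs=2,                 # changed from 1 to 2 in this commit
    lr_scheduler_type="constant",
    warmup_steps=2,
    seed=42,
    fp16=True,                          # "Native AMP" mixed precision
)
```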
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "k_proj",
  "o_proj",
- "lm_head",
- "down_proj",
  "q_proj",
- "v_proj",
+ "k_proj",
+ "down_proj",
  "up_proj",
+ "lm_head",
+ "v_proj",
  "gate_proj"
  ],
  "task_type": "CAUSAL_LM",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9f2e16a5be827477e59a5d6847bf7a7ec3afed264d65b2046156b225a25b0c6c
+ oid sha256:0063a1371acc9374cf9014d4c097a07dbe027f2b68e073734cd5e889314a33c4
  size 1204678496
runs/Jul25_08-54-30_221a6d18e628/events.out.tfevents.1721897680.221a6d18e628.34.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0bca6e9379480e5a1ea64e7c8c61f769fcb108ede19c94c09a6db7ae54b0179
+ size 6113
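The new event file above is a TensorBoard log written by the Trainer during this run. If the repo is cloned locally (with LFS), it can be inspected with TensorBoard's EventAccumulator; the sketch below assumes the default local path and a typical `train/loss` scalar tag, neither of which is confirmed by the diff.

```python
# Sketch: reading scalars from the TensorBoard event file added in this commit.
# The directory path and the "train/loss" tag are assumptions.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Jul25_08-54-30_221a6d18e628")
acc.Reload()
print(acc.Tags()["scalars"])          # list the scalar tags actually logged
for event in acc.Scalars("train/loss"):
    print(event.step, event.value)
```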
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c538960abc70629376921d5a58b50390216d682b5de3278bdf6360635dc6ca07
+ oid sha256:574d888ac97bbb7913ddf1daeaa2acbb8d2918e7ad8c02cbc7def9d4df8c0b9f
  size 5368
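To use the updated adapter weights (`adapter_model.safetensors`), the adapter is loaded on top of the base model with peft. A minimal sketch, assuming the Hub repo id from the commit header and float16 weights:

```python
# Minimal loading sketch; repo id taken from the commit header, dtype assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    torch_dtype=torch.float16,  # assumed; not specified in the card
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Loads the adapter_model.safetensors updated in this commit
model = PeftModel.from_pretrained(base, "Sibinraj/mistral-instruct-ft2")
model.eval()
```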