JoseBambora committed on
Commit 5021994
1 Parent(s): abf7452

End of training

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.1265
+- Loss: 2.1389
 
 ## Model description
 
@@ -51,10 +51,10 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.9839        | 1.0   | 225  | 2.0810          |
-| 1.8917        | 2.0   | 450  | 2.0692          |
-| 1.7552        | 3.0   | 675  | 2.0989          |
-| 1.6631        | 4.0   | 900  | 2.1265          |
+| 2.0313        | 1.0   | 225  | 2.0849          |
+| 1.9785        | 2.0   | 450  | 2.0891          |
+| 1.7995        | 3.0   | 675  | 2.1143          |
+| 1.7266        | 4.0   | 900  | 2.1389          |
 
 
 ### Framework versions
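In the updated table, training loss falls every epoch while validation loss rises after epoch 1, the usual overfitting signature. A quick sanity check over the numbers reported above (values copied from the new table; the variable names are illustrative):

```python
# Losses from the updated README table, epochs 1 through 4.
train_losses = [2.0313, 1.9785, 1.7995, 1.7266]
val_losses = [2.0849, 2.0891, 2.1143, 2.1389]

# Best checkpoint by validation loss (1-based epoch index).
best_epoch = min(range(len(val_losses)), key=lambda i: val_losses[i]) + 1

# Training loss decreases monotonically while validation loss increases:
# a sign the adapter starts overfitting after the first epoch.
overfitting = (
    all(b < a for a, b in zip(train_losses, train_losses[1:]))
    and all(b > a for a, b in zip(val_losses, val_losses[1:]))
)

print(best_epoch, overfitting)
```

By this reading, the epoch-1 checkpoint (validation loss 2.0849) is the strongest of the four, even though the final reported loss is 2.1389.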
adapter_config.json CHANGED
@@ -19,8 +19,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "q_proj",
-    "v_proj"
+    "v_proj",
+    "q_proj"
   ],
   "task_type": "CAUSAL_LM"
 }
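The only change in this file is the order of the two entries, likely because PEFT keeps `target_modules` as an unordered collection, so serialization order can vary between runs (an assumption about the cause, not stated in the commit). Either way, both versions target the same modules:

```python
# target_modules before and after the commit; order differs, contents do not.
before = ["q_proj", "v_proj"]
after = ["v_proj", "q_proj"]

same_modules = set(before) == set(after)
print(same_modules)
```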
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6c5d35fba5afe9c76bc9579561a21c823fed493bea9abecba0567602f36ec9f6
+oid sha256:ea70d2d6c6098227b442c05809e3f350477f32cae4119d03118ccb1dbc66cc07
 size 109069176
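Both sides of this diff are Git LFS pointer files: three `key value` lines, of which only the `oid` changed here (the byte size happens to be identical). A pointer like this can be read with a few lines of stdlib Python (a sketch, not the official LFS client):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ea70d2d6c6098227b442c05809e3f350477f32cae4119d03118ccb1dbc66cc07
size 109069176
"""
info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```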
runs/May04_17-19-17_b3e634cb498c/events.out.tfevents.1714843160.b3e634cb498c.544.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ccdfed9d067303bfdde98e98b9b96d459703ef964fb51e8aabe0d5576c1b77c3
+size 11853
tokenizer.json CHANGED
@@ -1,11 +1,6 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 512,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
+  "truncation": null,
   "padding": null,
   "added_tokens": [
     {
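This hunk replaces a fixed truncation block (right-side truncation at 512 tokens) with `"truncation": null`, i.e. the saved tokenizer no longer truncates. The same edit can be reproduced on a `tokenizer.json` with the stdlib `json` module (a sketch over a minimal stand-in for the file, not the full config):

```python
import json

# Minimal stand-in for the relevant part of tokenizer.json before the commit.
config = json.loads("""
{
  "version": "1.0",
  "truncation": {
    "direction": "Right",
    "max_length": 512,
    "strategy": "LongestFirst",
    "stride": 0
  },
  "padding": null
}
""")

# Disable truncation, mirroring the committed change
# (json.dump would then serialize it back as "truncation": null).
config["truncation"] = None

print(json.dumps(config["truncation"]))
```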
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:962c487ebcecdbb09f5ea4685f8faad4d6080941b1e982375c8e85bdfe23bbfa
+oid sha256:1f77e0945d704f8e785e1b0e58314590a2a93eedba24af316015132b366e0a5d
 size 4283