Chris Alexiuk committed
Commit e024f1e
1 Parent(s): 77956ee

ai-maker-space/mistral-7binstruct-summary-100s

README.md CHANGED
@@ -20,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4679
+- Loss: 1.4676
 
 ## Model description
 
@@ -52,14 +52,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.679         | 0.22  | 25   | 1.5459          |
-| 1.557         | 0.43  | 50   | 1.4679          |
+| 1.6649        | 0.22  | 25   | 1.5493          |
+| 1.6041        | 0.43  | 50   | 1.4676          |
 
 
 ### Framework versions
 
-- PEFT 0.8.2
-- Transformers 4.38.1
-- Pytorch 2.1.0+cu121
-- Datasets 2.17.1
+- PEFT 0.10.0
+- Transformers 4.39.3
+- Pytorch 2.2.1+cu121
+- Datasets 2.18.0
 - Tokenizers 0.15.2
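The card above describes a LoRA adapter trained on top of Mistral-7B-Instruct-v0.2. For orientation, here is a minimal sketch of loading this adapter with PEFT; the repo id comes from this commit, but the prompt and generation settings are illustrative assumptions, not part of the card.

```python
# Minimal sketch: load the LoRA adapter from this repo on top of its base model.
# Assumes peft>=0.10.0 and transformers>=4.39 (the versions listed above).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "ai-maker-space/mistral-7binstruct-summary-100s"

# AutoPeftModelForCausalLM reads adapter_config.json, fetches the base model
# (mistralai/Mistral-7B-Instruct-v0.2), and attaches the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Illustrative prompt only; the card does not specify a prompt template.
prompt = "[INST] Summarize: Mistral 7B is a 7-billion-parameter language model. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```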
adapter_config.json CHANGED
@@ -6,6 +6,7 @@
   "fan_in_fan_out": false,
   "inference_mode": true,
   "init_lora_weights": true,
+  "layer_replication": null,
   "layers_pattern": null,
   "layers_to_transform": null,
   "loftq_config": {},
@@ -23,5 +24,6 @@
     "q_proj"
   ],
   "task_type": "CAUSAL_LM",
+  "use_dora": false,
   "use_rslora": false
 }
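The two new keys correspond to options added in recent PEFT releases: `layer_replication` (optional duplication of base-model layers) and `use_dora` (weight-decomposed low-rank adaptation, DoRA). A hedged sketch of how these fields map onto `peft.LoraConfig`; only `q_proj` and the three flags are confirmed by the diff, everything else is a placeholder.

```python
# Sketch of a LoraConfig matching the updated adapter_config.json fields.
# Rank/alpha and the full target_modules list are not visible in the diff.
from peft import LoraConfig

config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj"],   # the diff confirms q_proj; others are elided
    use_rslora=False,            # rank-stabilized LoRA scaling, off here
    use_dora=False,              # added in PEFT 0.10.0: weight-decomposed LoRA
    layer_replication=None,      # added field: optional layer duplication spec
)
```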
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:371726668e0352eb0ddb0ee13fdd91e3101b1998bbcb619f9fa9e6c74e97e859
+oid sha256:524f35bc029cca42390cf052de4f6c8956b77cd297ad794fc1ea7c1adaf502ad
 size 27280152
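The binary files in this commit are stored as Git LFS pointers: each pointer records only the spec version, a sha256 oid, and the byte size of the real object. A minimal sketch of verifying a downloaded artifact against its pointer; the filename is from this repo, but the helper itself is illustrative.

```python
# Sketch: recompute the Git LFS oid (a plain sha256 of the file contents)
# to check a downloaded artifact against its pointer.
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the sha256 hex digest that Git LFS records as the pointer's oid."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# For the adapter weights pushed in this commit, this should print
# 524f35bc029cca42390cf052de4f6c8956b77cd297ad794fc1ea7c1adaf502ad.
print(lfs_oid("adapter_model.safetensors"))
```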
runs/Apr16_22-25-52_75d617b1173a/events.out.tfevents.1713306385.75d617b1173a.402.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26ed5f4c9554450a61dc6801f76771475d7e852e324051cd78ca7142ae590163
+size 7037
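The added file is a TensorBoard event log written during training. One way to inspect the logged scalars, assuming the standard `tensorboard` package; the tag name `eval/loss` is what the HF Trainer usually logs and is an assumption here.

```python
# Sketch: read scalar metrics out of the committed TensorBoard event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Apr16_22-25-52_75d617b1173a")
acc.Reload()                    # parse all event files in the directory
print(acc.Tags()["scalars"])    # discover the actual scalar tag names

for event in acc.Scalars("eval/loss"):   # assumed tag name; check Tags() first
    print(f"step {event.step}: {event.value:.4f}")
```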
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bcc87dd0b7311063683f72f4c258a74b2d48fa674885727d339c7a9f93382ace
+oid sha256:7e724461c3b1bc07935f66fbe93443f4fbd5943ebfe5a0ec0629c37875208d8f
 size 4920