vxbrandon committed on
Commit d98cf74
1 Parent(s): 2a1e815

End of training

README.md CHANGED
@@ -4,18 +4,18 @@ base_model: mistralai/Mistral-7B-v0.1
 tags:
 - generated_from_trainer
 model-index:
-- name: sparse_mistral_7b_refined_web_50p_2024-04-12
+- name: sparse_mistral_7b_refined_web_50p_2024-04-13
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# sparse_mistral_7b_refined_web_50p_2024-04-12
+# sparse_mistral_7b_refined_web_50p_2024-04-13
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.2135
+- Loss: 2.2091
 
 ## Model description
 
@@ -45,7 +45,7 @@ The following hyperparameters were used during training:
 - total_eval_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- training_steps: 350
+- training_steps: 600
 
 ### Training results
 
@@ -61,10 +61,20 @@ The following hyperparameters were used during training:
 | 2.2996 | 0.06 | 200 | 2.2678 |
 | 2.2788 | 0.07 | 225 | 2.2647 |
 | 2.2448 | 0.08 | 250 | 2.2637 |
-| 2.1813 | 0.09 | 275 | 2.2619 |
-| 2.2059 | 0.1 | 300 | 2.2602 |
-| 2.2689 | 0.1 | 325 | 2.2582 |
-| 2.2238 | 0.11 | 350 | 2.2579 |
+| 2.1837 | 0.09 | 275 | 2.2624 |
+| 2.2089 | 0.1 | 300 | 2.2621 |
+| 2.2686 | 0.1 | 325 | 2.2601 |
+| 2.2254 | 0.11 | 350 | 2.2593 |
+| 2.162 | 0.12 | 375 | 2.2590 |
+| 2.2687 | 0.13 | 400 | 2.2563 |
+| 2.2595 | 0.14 | 425 | 2.2571 |
+| 2.186 | 0.14 | 450 | 2.2564 |
+| 2.2689 | 0.15 | 475 | 2.2580 |
+| 2.2472 | 0.16 | 500 | 2.2554 |
+| 2.201 | 0.17 | 525 | 2.2538 |
+| 2.1977 | 0.18 | 550 | 2.2557 |
+| 2.238 | 0.18 | 575 | 2.2562 |
+| 2.1437 | 0.19 | 600 | 2.2550 |
 
 
 ### Framework versions
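For readers who want to map the hyperparameters in the card onto a runnable configuration, here is a minimal sketch using `transformers.TrainingArguments`. It sets only the values visible in this diff (Adam betas/epsilon, the linear scheduler, and the new 600-step budget); the `output_dir` name is an assumption taken from the model-index entry, and batch-size fields not shown in this hunk are left at their defaults.

```python
from transformers import TrainingArguments

# Illustrative only: reconstructs the hyperparameters reported in the card above.
# output_dir is a guess based on the model-index name; settings not shown in this
# diff (e.g. per-device batch sizes) are left at their defaults.
args = TrainingArguments(
    output_dir="sparse_mistral_7b_refined_web_50p_2024-04-13",
    max_steps=600,               # training_steps: 600 (raised from 350 in this commit)
    lr_scheduler_type="linear",  # lr_scheduler_type: linear
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,          # epsilon=1e-08
)
```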
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:07b9355dab70bf7351570f1681f5de0e42ae8988305728dec9567a8a7db48fd8
+oid sha256:0bdfdfc4591fa8fc25b756a220487b87f93381a8a226cc5b7770a75c61c38aa6
 size 4943162336
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:643924457a6246b2cfef7c06c92bef2b7653837e5b0879c5f740206ab222e044
+oid sha256:f085263f32f6960c6d5ef40ee4247ff7505791c6446c49c09096f23773eaee65
 size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:98ead7c7ae12dcd392b05ff80c4eb800fc228e51c2b3c5647b1cd6dc9a6a62c2
+oid sha256:9381852bdc2f47bb860792fa8b07b2ef0ba7f5e3d9b88d373b82bae7e36e0d67
 size 4540516344
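The three `.safetensors` entries above are git-lfs pointer files: each records the new shard's SHA-256 (`oid`) and byte size rather than the weights themselves. A minimal sketch for checking a locally downloaded shard against those two fields, assuming the file sits in the current directory under its repository filename:

```python
import hashlib
import os

def verify_shard(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Compare a downloaded shard with the oid/size recorded in its LFS pointer."""
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Values copied from the updated pointer for shard 1 of 3 above.
print(verify_shard(
    "model-00001-of-00003.safetensors",
    "0bdfdfc4591fa8fc25b756a220487b87f93381a8a226cc5b7770a75c61c38aa6",
    4943162336,
))
```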