Stern5497 committed on
Commit 84b3b58
1 Parent(s): a8f05d6

mistral-lp2-org_aug_a

README.md CHANGED
@@ -16,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.6260
- - F1 Micro: 0.5230
- - F1 Macro: 0.5155
- - F1 Weighted: 0.5274
+ - Loss: 0.8627
+ - F1 Micro: 0.6194
+ - F1 Macro: 0.6193
+ - F1 Weighted: 0.6193
 
  ## Model description
 
@@ -39,18 +39,18 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 0.0001
- - train_batch_size: 32
- - eval_batch_size: 32
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - training_steps: 25
+ - training_steps: 2001
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | F1 Weighted |
  |:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:-----------:|
- | 1.8454 | 0.0154 | 25 | 1.6260 | 0.5230 | 0.5155 | 0.5274 |
+ | 0.9832 | 0.2548 | 2000 | 0.8627 | 0.6194 | 0.6193 | 0.6193 |
 
 
  ### Framework versions
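
The updated card reports micro-, macro-, and weighted-averaged F1 alongside the new hyperparameters. Below is a minimal sketch of how those pieces would typically be wired together with `transformers` and `scikit-learn`; the `TrainingArguments` fields and metric averaging mirror the values in the README, while everything else (output directory, single-label assumption) is an assumption, not something recorded in this commit.

```python
# Hedged sketch only: mirrors the hyperparameters and metrics listed in the
# updated README; it is not the author's actual training script.
import numpy as np
from sklearn.metrics import f1_score
from transformers import TrainingArguments

def compute_metrics(eval_pred):
    """F1 micro / macro / weighted, assuming single-label classification.

    A multi-label setup would instead threshold sigmoid outputs per class.
    """
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1_micro": f1_score(labels, preds, average="micro"),
        "f1_macro": f1_score(labels, preds, average="macro"),
        "f1_weighted": f1_score(labels, preds, average="weighted"),
    }

training_args = TrainingArguments(
    output_dir="mistral-lp2-org_aug_a",  # placeholder taken from the commit message
    learning_rate=1e-4,                  # README: learning_rate 0.0001
    per_device_train_batch_size=16,      # README: train_batch_size 16
    per_device_eval_batch_size=16,       # README: eval_batch_size 16
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2001,                      # README: training_steps 2001
    adam_beta1=0.9,                      # Adam betas/epsilon as listed (library defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```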
adapter_config.json CHANGED
@@ -21,9 +21,9 @@
  "revision": null,
  "target_modules": [
    "q_proj",
+   "v_proj",
    "k_proj",
-   "o_proj",
-   "v_proj"
+   "o_proj"
  ],
  "task_type": "SEQ_CLS",
  "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c436c187cbabace0ab368aea37ad0c4711742d5d7324ad76d9c8544cf59472b4
+ oid sha256:2a7ce785314115beac1aae1ab678454b2674d1c5038a9794150c26376427ccf4
  size 578881968
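
Both binary artifacts in this commit are stored via Git LFS, so only the pointer file changes in the diff (object id and size), not the bytes themselves. A small, hedged sketch for checking that a downloaded adapter_model.safetensors matches the new pointer:

```python
# Verify a downloaded file against the LFS pointer shown above.
import hashlib
from pathlib import Path

EXPECTED_OID = "2a7ce785314115beac1aae1ab678454b2674d1c5038a9794150c26376427ccf4"
EXPECTED_SIZE = 578_881_968  # bytes, from the pointer

path = Path("adapter_model.safetensors")
sha = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert path.stat().st_size == EXPECTED_SIZE, "size does not match the LFS pointer"
assert sha.hexdigest() == EXPECTED_OID, "sha256 does not match the LFS pointer"
print("adapter_model.safetensors matches the pointer in this commit")
```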
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3745867cf9811ba4f8713c7ef3ed9214e2a6decf9a602ecbc1d1be5e8770d17c
+ oid sha256:69ac6d1e5d379cf776d79ba2cf05b41561bc972eda7cb863d7c1b269b7d18c06
  size 4920
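
training_args.bin is the serialized TrainingArguments object that the Trainer writes alongside its outputs, which is why it changes whenever the hyperparameters do. A hedged sketch for inspecting it: the file is a pickle written with torch.save, so only load copies you trust, and newer torch releases need weights_only=False.

```python
# Inspect the serialized TrainingArguments (a torch.save pickle).
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate)                # expected 0.0001 per the README
print(args.per_device_train_batch_size)  # expected 16
print(args.max_steps)                    # expected 2001
```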