viv6267 committed
Commit 066d4e3 · verified · 1 Parent(s): ccc1353

Model save
README.md CHANGED
@@ -1,11 +1,6 @@
 ---
 base_model: NousResearch/Llama-2-7b-hf
 library_name: peft
-metrics:
-- accuracy
-- precision
-- recall
-- f1
 tags:
 - generated_from_trainer
 model-index:
@@ -19,12 +14,6 @@ should probably proofread and complete it, then remove this comment. -->
 # Test_sagemaker
 
 This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.7043
-- Accuracy: 0.5053
-- Precision: 0.5009
-- Recall: 0.7339
-- F1: 0.5954
 
 ## Model description
 
@@ -52,16 +41,14 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 3
+- num_epochs: 1
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
 |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| No log | 1.0 | 55 | 0.9062 | 0.496 | 0.496 | 1.0 | 0.6631 |
-| No log | 2.0 | 110 | 0.7022 | 0.4787 | 0.4682 | 0.3763 | 0.4173 |
-| 0.902 | 2.9509 | 162 | 0.7043 | 0.5053 | 0.5009 | 0.7339 | 0.5954 |
+| No log | 0.9874 | 54 | 0.7080 | 0.5133 | 0.5101 | 0.4731 | 0.4909 |
 
 
 ### Framework versions
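One consequence of the change above is worth noting: with num_epochs reduced to 1, the run ends near step 54, while lr_scheduler_warmup_steps is still 500 — so under a linear warmup schedule the learning rate never finishes ramping up. A minimal pure-Python sketch of that schedule (the base learning rate below is hypothetical; it is not shown in this hunk):

```python
def linear_lr(step, base_lr, warmup_steps, total_steps):
    """Linear LR schedule: ramp 0 -> base_lr over warmup_steps,
    then decay linearly from base_lr back to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, total_steps - step) / max(1, total_steps - warmup_steps)

BASE_LR = 2e-4  # hypothetical; the actual learning_rate is outside this hunk
# The 1-epoch run stops near step 54, only 54/500 of the way through warmup,
# so the effective LR never exceeds ~10.8% of BASE_LR.
lr_at_end = linear_lr(54, BASE_LR, warmup_steps=500, total_steps=1000)
```

This may partly explain the flat metrics in the new results row: most of the single epoch is spent at a very small learning rate.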
fine_tuned_model/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d533f9479872ac0f0d98e6cefe9498dca4b09afc0ea734f91467d5ac3fa4f53e
+oid sha256:e9e3b80ea373b2cf552c8d89eb4372efaa982bc37568f6341079ef30c866b763
 size 16827064
fine_tuned_model/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fa69a02b8171f91250d7622fa6573d02206029f3ed9d93f104704544035d0223
+oid sha256:d66837b4e6a65741781516ce776cd1a3a18590392cf5ef44f68d669ad0125aa9
 size 5432
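The two binary files above are stored as Git LFS pointers: small text stubs recording a spec version, a sha256 `oid`, and the byte `size`. That is why the diff shows only the `oid` line changing while `size` stays identical. A minimal stdlib sketch of parsing that pointer format, using the new adapter pointer from this commit:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e9e3b80ea373b2cf552c8d89eb4372efaa982bc37568f6341079ef30c866b763
size 16827064
"""

info = parse_lfs_pointer(pointer)
# info["oid"] holds the content hash; int(info["size"]) is the real file size
```

An identical `size` with a changed `oid` is expected here: the adapter weights were retrained (same tensor shapes, hence the same 16,827,064 bytes) but their contents differ.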