keylazy committed on
Commit 50b09df
1 Parent(s): ee46089

End of training

README.md CHANGED
@@ -2,11 +2,6 @@
 base_model: keylazy/Llama-2-7b-chat-hf-ark
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
-- precision
-- recall
-- f1
 model-index:
 - name: Llama-2-7b-chat-hf-ark-ft-2
   results: []
@@ -19,11 +14,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [keylazy/Llama-2-7b-chat-hf-ark](https://huggingface.co/keylazy/Llama-2-7b-chat-hf-ark) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5956
-- Accuracy: 0.695
-- Precision: 0.6965
-- Recall: 0.695
-- F1: 0.6946
+- eval_loss: 0.1582
+- eval_accuracy: 0.9587
+- eval_precision: 0.9587
+- eval_recall: 0.9587
+- eval_f1: 0.9587
+- eval_runtime: 270.3757
+- eval_samples_per_second: 739.711
+- eval_steps_per_second: 46.232
+- epoch: 1.92
+- step: 27053
 
 ## Model description
 
@@ -53,18 +53,6 @@ The following hyperparameters were used during training:
 - num_epochs: 3
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| 0.5159        | 0.5   | 39   | 0.5348          | 0.657    | 0.6674    | 0.657  | 0.6522 |
-| 0.4738        | 1.0   | 78   | 0.5245          | 0.641    | 0.6841    | 0.641  | 0.6201 |
-| 0.3097        | 1.5   | 117  | 0.5485          | 0.67     | 0.6822    | 0.67   | 0.6650 |
-| 0.2708        | 2.0   | 156  | 0.5485          | 0.686    | 0.6988    | 0.686  | 0.6814 |
-| 0.1635        | 2.5   | 195  | 0.5949          | 0.69     | 0.6946    | 0.69   | 0.6885 |
-| 0.1482        | 3.0   | 234  | 0.5956          | 0.695    | 0.6965    | 0.695  | 0.6946 |
-
-
 ### Framework versions
 
 - Transformers 4.35.0
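The eval_* fields added to the card are the keys a `transformers.Trainer` reports when evaluation runs with a custom `compute_metrics` callback. The commit does not include the training script, so the following is only a minimal sketch of such a callback; the classification setup and the weighted averaging are assumptions (chosen because precision, recall and F1 track accuracy in the reported numbers):

```python
# Sketch only: not part of this commit. Assumes a classification task and
# weighted averaging of precision/recall/F1.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair passed by Trainer during evaluation.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }


if __name__ == "__main__":
    # Tiny smoke test with fake logits/labels.
    fake_logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
    fake_labels = np.array([1, 0, 0])
    print(compute_metrics((fake_logits, fake_labels)))
```

When a callback like this is passed as `compute_metrics=` to `Trainer`, the returned keys are reported with the `eval_` prefix seen above, alongside `eval_loss`, `eval_runtime` and the throughput fields.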
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f0e642efef15e0fce600a97a8d10cb64e7eb358beebb69c2723e712039ecea5e
+oid sha256:e91f1b0b439210eaf47fa08649f4fea916c3bb7282af422297c5feaecf3c396f
 size 398865392
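The `model.safetensors` entry is a Git LFS pointer; only its hash changes here, and the actual weights are downloaded automatically by `from_pretrained`. A minimal loading sketch follows; the repo id is taken from the model-index name in the README, and the sequence-classification head is an assumption based on the accuracy/precision/recall/F1 metrics, not something stated in the commit:

```python
# Sketch only: repo id and model head are assumptions, see note above.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "keylazy/Llama-2-7b-chat-hf-ark-ft-2"  # assumed from the model-index name
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
print(predicted_class)
```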
runs/Nov11_04-02-39_30403274ce12/events.out.tfevents.1699675442.30403274ce12.2583.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:eb5b4114ee323f835c9b63dc21129eed01bd030e669072943eac309948254735
-size 8644
+oid sha256:f1298c02fa57645538bac0e0c1bd4bcfb6c8c404dacd3be42fb4313bfb54003f
+size 9765
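The updated `events.out.tfevents.*` file is the TensorBoard log for this run, also stored as a Git LFS pointer. A minimal sketch of inspecting it offline with TensorBoard's `EventAccumulator` follows; the local path is an assumption about where the run directory is checked out:

```python
# Sketch only: assumes the runs/ directory has been downloaded locally.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

path = "runs/Nov11_04-02-39_30403274ce12"  # assumed local copy of the run directory
acc = EventAccumulator(path)
acc.Reload()

# List scalar tags (e.g. loss, accuracy) and print the last logged value of each.
for tag in acc.Tags().get("scalars", []):
    events = acc.Scalars(tag)
    print(tag, events[-1].step, events[-1].value)
```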