David-Egea committed on
Commit 00d43a0
1 Parent(s): 66c3cc0

End of training

README.md ADDED
@@ -0,0 +1,70 @@
+ ---
+ license: mit
+ base_model: prajjwal1/bert-small
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: bert-small-phishing
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-small-phishing
+
+ This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1006
+ - Accuracy: 0.9766
+ - Precision: 0.9713
+ - Recall: 0.9669
+ - F1: 0.9691
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 4
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+ | 0.202         | 1.0   | 762  | 0.0941          | 0.9717   | 0.9728    | 0.9520 | 0.9623 |
+ | 0.077         | 2.0   | 1524 | 0.0964          | 0.9764   | 0.9757    | 0.9617 | 0.9686 |
+ | 0.0428        | 3.0   | 2286 | 0.0992          | 0.9786   | 0.9739    | 0.9695 | 0.9717 |
+ | 0.0248        | 4.0   | 3048 | 0.1006          | 0.9766   | 0.9713    | 0.9669 | 0.9691 |
+
+ ### Framework versions
+
+ - Transformers 4.38.2
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
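The evaluation metrics reported in the model card above (accuracy, precision, recall, F1) follow the standard binary-classification definitions computed from a confusion matrix. A minimal plain-Python sketch of those definitions; the counts used in the example are hypothetical and are not taken from this training run:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute accuracy, precision, recall and F1 from confusion-matrix counts.

    tp/fp/fn/tn = true positives, false positives, false negatives, true negatives.
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)          # of predicted positives, how many were right
    recall = tp / (tp + fn)             # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts, for illustration only:
m = classification_metrics(tp=90, fp=10, fn=5, tn=95)
print(m["precision"])  # 0.9
```

Note that F1 is the harmonic mean of precision and recall, which is why the card's F1 values always fall between the corresponding precision and recall columns.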
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:29343ab218bd036108a9a16120d57bcfaabf294c7554887d4e9250fbf9752403
+ oid sha256:e202f64067553b5ab4b11aaea308fb1822cd19d3f8b0de5a79a30e702b051d6b
  size 115067048
runs/Apr09_16-50-56_6023a4edcab7/events.out.tfevents.1712681490.6023a4edcab7.1052.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4043a6a59bd34dc67d5b7b5c41ab946f4e8dafb0fdde18681ec76c67f1f7e8ef
- size 6860
+ oid sha256:d5e33edf5b66dd168b1e767f2d257243c914a0cafe1d791a91e9ac7f6bd39fd3
+ size 8108