Model Card for dhruvyadav89300/BERT-phishing-classifier

Model Details

Model Description

This model is a fine-tuned version of google-bert/bert-base-uncased on the phishing-site-classification dataset. The checkpoint has 109M parameters stored as F32 safetensors.
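
The checkpoint can be loaded with the standard transformers sequence-classification API. The sketch below is illustrative only: the example URL and the assumption that label 1 corresponds to phishing are not taken from this card, so check the checkpoint's id2label mapping before relying on it.

```python
# Minimal inference sketch; the example URL and the "label 1 = phishing"
# assumption are illustrative, not part of the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "dhruvyadav89300/BERT-phishing-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

url = "http://example-login-verify.com/account/update"  # hypothetical input
inputs = tokenizer(url, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
pred = int(probs.argmax())
print(f"predicted label {model.config.id2label[pred]} (p={probs[pred]:.3f})")
```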


Evaluation

Training Results

| Epoch | Training Loss | Step | Validation Loss | Accuracy | AUC   | Learning Rate |
|-------|---------------|------|-----------------|----------|-------|---------------|
| 1     | 0.4932        | 263  | 0.4237          | 0.789    | 0.912 | 0.00019       |
| 2     | 0.3908        | 526  | 0.3761          | 0.824    | 0.932 | 0.00018       |
| 3     | 0.3787        | 789  | 0.3136          | 0.860    | 0.941 | 0.00017       |
| 4     | 0.3606        | 1052 | 0.4401          | 0.818    | 0.944 | 0.00016       |
| 5     | 0.3545        | 1315 | 0.2928          | 0.864    | 0.947 | 0.00015       |
| 6     | 0.3600        | 1578 | 0.3406          | 0.867    | 0.949 | 0.00014       |
| 7     | 0.3233        | 1841 | 0.2897          | 0.869    | 0.950 | 0.00013       |
| 8     | 0.3411        | 2104 | 0.3328          | 0.871    | 0.949 | 0.00012       |
| 9     | 0.3292        | 2367 | 0.3189          | 0.876    | 0.954 | 0.00011       |
| 10    | 0.3239        | 2630 | 0.3685          | 0.849    | 0.956 | 0.00010       |
| 11    | 0.3201        | 2893 | 0.3317          | 0.862    | 0.956 | 0.00009       |
| 12    | 0.3335        | 3156 | 0.2725          | 0.869    | 0.957 | 0.00008       |
| 13    | 0.3230        | 3419 | 0.2856          | 0.882    | 0.955 | 0.00007       |
| 14    | 0.3087        | 3682 | 0.2900          | 0.882    | 0.957 | 0.00006       |
| 15    | 0.3050        | 3945 | 0.2704          | 0.893    | 0.957 | 0.00005       |
| 16    | 0.3032        | 4208 | 0.2662          | 0.878    | 0.957 | 0.00004       |
| 17    | 0.3027        | 4471 | 0.2930          | 0.882    | 0.956 | 0.00003       |
| 18    | 0.2950        | 4734 | 0.2707          | 0.880    | 0.957 | 0.00002       |
| 19    | 0.2998        | 4997 | 0.2782          | 0.884    | 0.957 | 0.00001       |
| 20    | 0.2971        | 5260 | 0.2792          | 0.882    | 0.957 | 0.00000       |
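
The Accuracy and AUC columns are standard binary-classification metrics evaluated once per epoch. The card does not include the metric code, so the function below is only a plausible sketch of how such values could be produced with a Hugging Face Trainer, using scikit-learn's accuracy_score and roc_auc_score.

```python
# Hedged sketch of a per-epoch metric function (not taken from the card).
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # Softmax over the two classes; assumes label 1 is the phishing class.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "auc": roc_auc_score(labels, probs[:, 1]),
    }
```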

Final Training Summary

  • Total Training Runtime: 555.4381 seconds
  • Final Training Loss: 0.3372
  • Train Samples per Second: 75.616
  • Eval Accuracy (Best Epoch): 0.893 (Epoch 15)
  • Eval AUC (Best): 0.957 (reached at multiple epochs)
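
The Learning Rate column decays from 0.00019 after epoch 1 to 0 after epoch 20, which is consistent with an initial rate of 2e-4 and a linear scheduler over 20 epochs. The configuration below reconstructs that schedule as an assumption based on the table; the batch size and output directory are placeholders, not values from the card.

```python
# Hedged reconstruction of the training configuration; only the epoch count,
# initial learning rate, and linear decay are inferred from the results table.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-phishing-classifier",  # hypothetical path
    num_train_epochs=20,
    learning_rate=2e-4,                     # ~1.9e-4 after epoch 1, 0 after epoch 20
    lr_scheduler_type="linear",
    per_device_train_batch_size=8,          # assumption, not stated in the card
    evaluation_strategy="epoch",
    logging_strategy="epoch",
)
```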