codechrl committed
Commit affd850 (verified) · Parent: aafaf2e

Training update: 159,611/161,610 rows (98.76%) | +4 new @ 2025-11-08 07:04:18

Files changed (4)
  1. README.md +5 -5
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
  4. training_metadata.json +6 -6
README.md CHANGED
@@ -25,7 +25,7 @@ pipeline_tag: fill-mask
 - Model type: fine-tuned lightweight BERT variant
 - Languages: English & Indonesia
 - Finetuned from: `boltuix/bert-micro`
-- Status: **Early version** — trained on **98.77%** of planned data.
+- Status: **Early version** — trained on **98.76%** of planned data.
 
 **Model sources**
 - Base model: [boltuix/bert-micro](https://huggingface.co/boltuix/bert-micro)
@@ -51,7 +51,7 @@ You can use this model to classify cybersecurity-related text — for example, w
 - Early classification of SIEM alert & events.
 
 ## 3. Bias, Risks, and Limitations
-Because the model is based on a small subset (98.77%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign language).
+Because the model is based on a small subset (98.76%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign language).
 - Inherits any biases present in the base model (`boltuix/bert-micro`) and in the fine-tuning data — e.g., over-representation of certain threat types, vendor or tooling-specific vocabulary.
 - **Should not be used as sole authority for incident decisions; only as an aid to human analysts.**
 
@@ -75,9 +75,9 @@ Since cybersecurity data often contains lengthy alert descriptions and execution
 - **LR scheduler**: Linear with warmup
 
 ### Training Data
-- **Total database rows**: 161,602
-- **Rows processed (cumulative)**: 159,607 (98.77%)
-- **Training date**: 2025-11-08 05:19:56
+- **Total database rows**: 161,610
+- **Rows processed (cumulative)**: 159,611 (98.76%)
+- **Training date**: 2025-11-08 07:04:18
 
 ### Post-Training Metrics
 - **Final training loss**:
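The README hunks above describe running the checkpoint over cybersecurity text, and the card keeps `pipeline_tag: fill-mask`. A minimal usage sketch with the `transformers` fill-mask pipeline follows; the repo id is a placeholder (the actual repository name is not shown in this commit), and a local path to a checkout works the same way.

```python
# Minimal usage sketch. The repo id below is a placeholder, not confirmed by
# this commit; point it at the actual model repo or a local checkout instead.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="your-namespace/bert-micro-cyber")  # hypothetical id

# bert-micro derivatives use the standard BERT [MASK] token.
preds = fill_mask("The SIEM raised an alert for a suspicious [MASK] attempt.")
for p in preds:
    print(f"{p['token_str']:>15}  {p['score']:.4f}")
```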
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0328f3d2ac22c2c6e82435bcd2ab90ba8a88ac8a3e437b8176fe449ace055bb2
+oid sha256:2b6fa1d2b6d9153d221430edce6cf643fd80ecdfd2a993ebcde81ffb2c79d012
 size 17671560
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0559576248bf2f9a92acf72375468272324d391be2571700a6524d12c702ca07
+oid sha256:c1b9c7354593a0db9ead8118921aefce91f1f8409be2983e6ac8580322c8a883
 size 5905
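Both binary files in this commit are stored as Git LFS pointers (version / oid / size). A simple way to confirm that a locally downloaded copy matches the new pointer is to recompute its SHA-256, as in the sketch below; the file name follows the repo, and the expected digest is the new `oid sha256:` value from the `training_args.bin` hunk above.

```python
# Sketch: verify a downloaded artifact against the LFS pointer in this commit.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large blobs need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "c1b9c7354593a0db9ead8118921aefce91f1f8409be2983e6ac8580322c8a883"
print(sha256_of("training_args.bin") == expected)  # True if the download is intact
```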
training_metadata.json CHANGED
@@ -1,11 +1,11 @@
 {
-  "trained_at": 1762579196.4018416,
-  "trained_at_readable": "2025-11-08 05:19:56",
-  "samples_this_session": 888,
+  "trained_at": 1762585458.763325,
+  "trained_at_readable": "2025-11-08 07:04:18",
+  "samples_this_session": 1393,
   "new_rows_this_session": 4,
-  "trained_rows_total": 159607,
-  "total_db_rows": 161602,
-  "percentage": 98.76548557567358,
+  "trained_rows_total": 159611,
+  "total_db_rows": 161610,
+  "percentage": 98.76307159210444,
   "final_loss": 0,
   "epochs": 3,
   "learning_rate": 5e-05,