
Model Card for DDoSBert

The imperative for robust detection mechanisms has grown in the face of increasingly sophisticated Distributed Denial of Service (DDoS) attacks. This paper introduces DDoSBERT, an approach that harnesses transformer text classification for DDoS detection. The methodology conducts a detailed exploration of feature selection methods, emphasizing critical techniques including Correlation, Mutual Information, and Univariate Feature Selection. Motivated by the dynamic landscape of DDoS attacks, DDoSBERT confronts contemporary challenges such as binary classification, multi-attack classification, and imbalanced attack classification. The methodology examines diverse text transformation techniques for feature selection and employs three transformer classification models: distilbert-base-uncased, prunebert-base-uncased-6-finepruned-w-distil-mnli, and distilbert-base-uncased-finetuned-sst-2-english. Additionally, the paper outlines a comprehensive framework for assessing feature importance across five DDoS datasets: APA-DDoS, CRCDDoS2022, DDoS Attack SDN, CIC-DDoS-2019, and BCCC-cPacket-Cloud-DDoS-2024. The experimental results, rigorously evaluated against relevant benchmarks, affirm the efficacy of DDoSBERT and underscore its significance in enhancing the resilience of systems against DDoS attacks through text-based feature transformation. The discussion section interprets the results, highlights the implications of the findings, and acknowledges limitations while suggesting avenues for future research.
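The text-transformation step described above can be sketched as follows. The field names and serialization format below are illustrative assumptions, not the exact columns of the five datasets; consult the linked dataset repository for the real feature layout.

```python
# Illustrative sketch: serializing a tabular DDoS flow record into a single
# text string that a BERT tokenizer can consume. Field names are assumptions.
def flow_to_text(flow: dict) -> str:
    """Join feature name/value pairs into one space-separated string."""
    return " ".join(f"{name} {value}" for name, value in flow.items())

sample_flow = {"protocol": "TCP", "src_port": 443, "pkt_count": 1500}
print(flow_to_text(sample_flow))  # protocol TCP src_port 443 pkt_count 1500
```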

Our DDoSBert model was fine-tuned on a DDoS dataset from the base model distilbert/distilbert-base-uncased-finetuned-sst-2-english.

Training Data

Please refer to: https://huggingface.co/datasets/Thi-Thu-Huong/Comprehensive_Feature_Extraction_DDoS_Datasets/tree/main

How to use

(1) Use a pipeline as a high-level helper

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="Thi-Thu-Huong/DDoSBert")
```

(2) Load model directly

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Thi-Thu-Huong/DDoSBert")
model = AutoModelForSequenceClassification.from_pretrained("Thi-Thu-Huong/DDoSBert")
```
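Once loaded, a serialized flow can be scored with the tokenizer and model directly. A minimal sketch, where the input text format is an assumption and the label names come from whatever `id2label` mapping the checkpoint ships with:

```python
import torch

def predict_label(model, tokenizer, text: str):
    """Run one text-serialized flow through the classifier and return
    (label, confidence) using the model's id2label mapping."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    idx = int(probs.argmax())
    return model.config.id2label[idx], float(probs[idx])
```

For example, `label, score = predict_label(model, tokenizer, text)` yields the predicted class and its softmax probability for one record.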

Testing Data

Please refer to: https://huggingface.co/datasets/Thi-Thu-Huong/Comprehensive_Feature_Extraction_DDoS_Datasets/tree/main/4-CICDDoS2019%20Feature%20Extraction%20DB

Results

Testing Accuracy: 0.9944005270092227

Overall

  • Precision: 0.9944
  • Recall: 0.9944
  • F1 Score: 0.9944
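Precision, recall, and F1 all coinciding with the testing accuracy is consistent with micro-averaging, under which the three scores reduce to plain accuracy for single-label multi-class classification; the averaging scheme is an assumption here, as the card does not state it. A minimal sketch:

```python
def micro_metrics(y_true, y_pred):
    """Micro-averaged precision/recall/F1 for single-label multi-class
    predictions: every misclassification counts as both a false positive
    and a false negative, so all three scores equal plain accuracy."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    acc = correct / len(y_true)
    return {"precision": acc, "recall": acc, "f1": acc}

print(micro_metrics(["DDoS", "BENIGN", "DDoS"], ["DDoS", "BENIGN", "BENIGN"]))
```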

Citation

Le, T.T.H., Heo, S., Cho, J., & Kim, H. (2025). DDoSBERT: Fine-tuning variant text classification bidirectional encoder representations from transformers for DDoS detection. Computer Networks, 262, 111150.

BibTeX:

```bibtex
@article{LE2025111150,
  title    = {DDoSBERT: Fine-tuning variant text classification bidirectional encoder representations from transformers for DDoS detection},
  journal  = {Computer Networks},
  volume   = {262},
  pages    = {111150},
  year     = {2025},
  issn     = {1389-1286},
  doi      = {https://doi.org/10.1016/j.comnet.2025.111150},
  url      = {https://www.sciencedirect.com/science/article/pii/S1389128625001185},
  author   = {Thi-Thu-Huong Le and Shinwook Heo and Jaehan Cho and Howon Kim},
  keywords = {DDoS (distributed denial of service), IDS (intrusion detection system), Text classification, Fine-tuning, BERT (bidirectional encoder representations from transformers)}
}
```

```bibtex
@misc{le_2025,
  author    = {Le},
  title     = {Comprehensive_Feature_Extraction_DDoS_Datasets (Revision 1dafba0)},
  year      = {2025},
  url       = {https://huggingface.co/datasets/Thi-Thu-Huong/Comprehensive_Feature_Extraction_DDoS_Datasets},
  doi       = {10.57967/hf/4812},
  publisher = {Hugging Face}
}
```

Model Card Contact

Email: lehuong7885@gmail.com

