Model Card for noobpk/dga-detection
Model Details
Model Description
The model is a neural network specifically designed to detect domains produced by domain generation algorithms (DGAs). Malware commonly uses DGAs to generate pseudo-random domain names for command-and-control communication. This model identifies and classifies such domains.
Key Features:
1. Input Layer: Accepts sequences with a maximum length of 45 characters.
2. Embedding Layer: Converts input characters into dense vector representations of size 100.
3. Conv1D Layer: Applies 256 filters with a kernel size of 4 to extract features, followed by ReLU activation for non-linearity.
4. Flatten Layer: Transforms the multi-dimensional tensor into a 1D array for further processing.
5. Dense Layer 1: Contains 512 units with ReLU activation to learn high-level patterns.
6. Dense Layer 2: A final layer with 1 unit and a sigmoid activation for binary classification, predicting whether a domain is generated by a DGA.
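For reference, the layer stack described above can be written in Keras roughly as follows. This is a minimal sketch rather than the exact training code: the vocabulary size (40 characters) is inferred from the tokenizer in the getting-started example below, and the optimizer and loss settings are assumptions.

import os
os.environ["KERAS_BACKEND"] = "tensorflow"
from tensorflow.keras import layers, models

MAX_LEN = 45      # maximum domain length accepted by the model
VOCAB_SIZE = 40   # assumed: size of the character set used in the getting-started example

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=100),
    layers.Conv1D(filters=256, kernel_size=4, activation="relu"),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])  # assumed settings
model.summary()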
- Developed by: noobpk
Uses
The model is designed to be used directly for identifying domains generated by domain generation algorithms (DGAs), which are often associated with malicious software. This includes applications in:
Direct Use
Cybersecurity Tools: Integrating the model into systems to detect and block potentially harmful domains.
Network Traffic Monitoring: Assisting in real-time analysis to identify abnormal patterns.
Educational and Research Purposes: Understanding DGA behavior and improving algorithms for detecting them.
Out-of-Scope Use
The model should not be used for:
Critical Systems: Relying on the model as the sole safeguard in sensitive environments, such as systems handling financial transactions.
Malicious Intent: Using the model to target or exploit cybersecurity vulnerabilities.
Non-DGA Detection: The model may perform poorly on tasks unrelated to DGA detection, such as phishing detection or validating legitimate domains.
Bias, Risks, and Limitations
False Positives/Negatives: The model may misclassify legitimate domains or fail to identify certain DGA domains, leading to potential disruptions or security risks.
Bias in Training Data: If the training data is not representative of all DGA and non-DGA domains, the model's effectiveness may vary across different networks and datasets.
Dependency on Sequence Length: The model expects inputs padded or truncated to 45 characters; longer domains are truncated, which may degrade performance (see the sketch after this list).
Evolving Threats: As DGAs develop more sophisticated techniques, the model may require frequent retraining to adapt to new patterns.
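To make the length constraint concrete, the sketch below uses a made-up, over-long domain to show how inputs are padded or truncated to 45 characters with the same pad_sequences call used in the getting-started code.

from tensorflow.keras.preprocessing.sequence import pad_sequences

valid_characters = "$abcdefghijklmnopqrstuvwxyz0123456789-_."
tokens = {char: idx for idx, char in enumerate(valid_characters)}

# Hypothetical over-long domain: with truncating='post', only the first 45 tokens are kept
long_domain = "a" * 60 + ".example.com"
encoded = [tokens[c] for c in long_domain.lower() if c in tokens]
padded = pad_sequences([encoded], maxlen=45, padding='post', truncating='post')

print(len(encoded))    # 72 tokens before truncation
print(padded.shape)    # (1, 45) after padding/truncation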
Recommendations
To minimize risks:
Regularly update the model with new data reflecting evolving DGA techniques.
Employ this model alongside other cybersecurity measures to enhance its effectiveness.
Validate the model's output in diverse network environments to ensure reliability.
How to Get Started with the Model
Use the code below to get started with the model.
import os
os.environ["KERAS_BACKEND"] = "tensorflow"

from huggingface_hub import hf_hub_download
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.sequence import pad_sequences

def load_modeler():
    # Download the model weights from the Hugging Face Hub and load them
    local_model_path = hf_hub_download(
        repo_id="noobpk/dga-detection",
        filename="model.h5"
    )
    return load_model(local_model_path)

model = load_modeler()

# Character vocabulary used to encode domains
valid_characters = "$abcdefghijklmnopqrstuvwxyz0123456789-_."
tokens = {char: idx for idx, char in enumerate(valid_characters)}

if __name__ == "__main__":
    payload = input("Enter payload: ")
    print("Processing payload...")
    # Convert the domain to lowercase and map each character to its token index
    payload_encoded = [tokens[char] for char in payload.lower() if char in tokens]
    # Pad and truncate the sequence to the model's fixed input length of 45
    domain_encoded = pad_sequences([payload_encoded], maxlen=45, padding='post', truncating='post')
    # Make prediction: the sigmoid output is the probability that the domain is DGA-generated
    prediction = model.predict(domain_encoded)
    score = float(prediction[0][0] * 100)
    print(f"DGA probability: {score:.2f}%")
Training Details
Training Data
Dataset: dga-detection
- 70% of the data is used for training
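The exact split procedure and random seed are not documented, so the sketch below only illustrates how a 70/30 train/test split of encoded domains could be produced (placeholder data throughout).

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the encoded domains and labels (1 = DGA, 0 = legitimate)
X = np.random.randint(0, 40, size=(1000, 45))
y = np.random.randint(0, 2, size=1000)

# Hold out 30% of the data for testing, keep 70% for training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)
print(X_train.shape, X_test.shape)  # (700, 45) (300, 45)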
Evaluation
Testing Data, Factors & Metrics
Testing Data
Dataset: dga-detection
- 30% of the data is held out for testing
Metrics
- precision
- f1-score
- recall
- accuracy
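These metrics can be computed from the model's predictions with scikit-learn. The sketch below assumes the loaded model and a held-out test set (X_test, y_test) such as the one produced in the split sketch above; the 0.5 decision threshold is an assumption.

from sklearn.metrics import accuracy_score, classification_report

# Convert sigmoid probabilities to binary labels (0.5 threshold is an assumption)
y_prob = model.predict(X_test)
y_pred = (y_prob >= 0.5).astype(int).ravel()

print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))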
Results
29704/29704 [==============================] - 82s 3ms/step - loss: 0.0249 - accuracy: 0.9917
29704/29704 [==============================] - 54s 2ms/step
Accuracy: 99.17%
              precision    recall  f1-score   support

           0       0.99      0.99      0.99    478072
           1       0.99      0.99      0.99    472448

    accuracy                           0.99    950520
   macro avg       0.99      0.99      0.99    950520
weighted avg       0.99      0.99      0.99    950520
Compute Infrastructure
- Google Colab Pro
Software
- Jupyter Notebook
Model Card Authors
Model Card Contact