|
--- |
|
datasets: |
|
- Vanessasml/cybersecurity_32k_instruction_input_output |
|
pipeline_tag: text-generation |
|
tags: |
|
- finance |
|
- supervision |
|
- cyber risk |
|
- cybersecurity |
|
- cyber threats |
|
- SFT |
|
- LoRA |
|
- A100GPU |
|
--- |
|
# Model Card for Cyber-risk-llama-3-8b |
|
|
|
## Model Description |
|
This model is a fine-tuned version of `meta-llama/Meta-Llama-3-8B` on the `vanessasml/cybersecurity_32k_instruction_input_output` dataset. |
|
|
|
It is specifically designed to enhance performance in generating and understanding cybersecurity content, identifying cyber threats, and classifying data under the NIST taxonomy and IT risks based on the ITC EBA guidelines.
|
|
|
## Intended Use |
|
- **Intended users**: Data scientists and developers working on cybersecurity applications. |
|
- **Out-of-scope use cases**: This model should not be used for medical advice, legal decisions, or any life-critical systems. |
|
|
|
## Training Data |
|
The model was fine-tuned on `vanessasml/cybersecurity_32k_instruction_input_output`, a dataset focused on cybersecurity news analysis. |
|
No special prompt format was applied, as [recommended](https://huggingface.co/blog/llama3#fine-tuning-with-%F0%9F%A4%97-trl).
|
|
|
## Training Procedure |
|
- **Preprocessing**: Text data were tokenized using the tokenizer corresponding to the base model `meta-llama/Meta-Llama-3-8B`. |
|
- **Hardware**: The training was performed on GPUs with mixed precision (FP16/BF16) enabled. |
|
- **Optimizer**: Paged AdamW with a cosine learning rate schedule. |
|
- **Epochs**: The model was trained for 1 epoch. |
|
- **Batch size**: 4 per device, with gradient accumulation where required. |
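
The procedure above can be sketched with `transformers.TrainingArguments`. Values not stated on this card (learning rate, accumulation steps, warmup) are illustrative placeholders, not the exact ones used:

```python
from transformers import TrainingArguments

# Sketch of the training configuration described above.
# learning_rate, gradient_accumulation_steps and warmup_ratio are
# assumptions for illustration only.
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,                # trained for 1 epoch
    per_device_train_batch_size=4,     # batch size 4 per device
    gradient_accumulation_steps=4,     # assumption: "where required"
    optim="paged_adamw_32bit",         # paged AdamW optimizer
    lr_scheduler_type="cosine",        # cosine learning rate schedule
    learning_rate=2e-4,                # assumption
    bf16=True,                         # mixed precision (FP16/BF16)
    gradient_checkpointing=True,       # noted under Environmental Impact
    group_by_length=True,              # noted under Environmental Impact
)
```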
|
|
|
## Evaluation Results |
|
Model evaluation was based on qualitative assessment of generated text relevance and coherence in the context of cybersecurity. |
|
|
|
## Quantization and Optimization |
|
- **Quantization**: 4-bit precision with type `nf4`. Nested quantization is disabled. |
|
- **Compute dtype**: `float16` to ensure efficient computation. |
|
- **LoRA Settings**: |
|
- LoRA attention dimension: `64` |
|
- Alpha parameter for LoRA scaling: `16` |
|
- Dropout in LoRA layers: `0.1` |
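
Together, these settings correspond to a QLoRA-style setup. A sketch using `BitsAndBytesConfig` and `peft.LoraConfig` with the values listed above (the `target_modules` list is an assumption, not stated on this card):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization, nested (double) quantization disabled,
# float16 compute dtype, as described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA: attention dimension r=64, alpha=16, dropout=0.1.
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
```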
|
|
|
## Environmental Impact |
|
- **Compute Resources**: Training leveraged energy-efficient hardware and practices to minimize the carbon footprint.

- **Strategies**: Gradient checkpointing and length-grouped batching were used to optimize memory and power usage.
|
|
|
## How to Use |
|
Here is how to load and use the model: |
|
|
|
```python |
|
import torch
import transformers

model_id = "vanessasml/cyber-risk-llama-3-8b"
|
|
|
pipeline = transformers.pipeline( |
|
"text-generation", |
|
model=model_id, |
|
model_kwargs={"torch_dtype": torch.bfloat16}, |
|
device="cuda", |
|
) |
|
# Define your user prompt

example_prompt_1 = """Question: What are the cyber threats present in the article? Explain why.\n
|
Article: More than one million Brits over the age of 45 have fallen victim to some form of email-related fraud, \ |
|
as the internet supersedes the telephone as the favored channel for scammers, according to Aviva. \ |
|
The insurer polled over 1000 adults over the age of 45 in the latest update to its long-running Real Retirement Report. \ |
|
Further, 6% said they had actually fallen victim to such an online attack, amounting to around 1.2 million adults. |
|
""" |
|
example_prompt_2 = "What are the main 5 ITC EBA IT risks?" |
|
|
|
messages = [ |
|
{"role": "system", "content": "You are an IT supervisor from a supervisory institution."}, |
|
{"role": "user", "content": example_prompt_2}, |
|
] |
|
|
|
prompt = pipeline.tokenizer.apply_chat_template( |
|
messages, |
|
tokenize=False, |
|
add_generation_prompt=True |
|
) |
|
|
|
terminators = [ |
|
pipeline.tokenizer.eos_token_id, |
|
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") |
|
] |
|
|
|
outputs = pipeline( |
|
prompt, |
|
max_new_tokens=500, |
|
eos_token_id=terminators, |
|
do_sample=True, |
|
temperature=0.1, |
|
top_p=0.9, |
|
) |
|
print(outputs[0]["generated_text"][len(prompt):]) |
|
``` |
|
|
|
Example output:

```
As an IT supervisor from a supervisory institution, I can provide you with the main 5 ITC EBA IT risks that we focus on: |
|
|
|
1. **Availability Risk**: The risk that IT systems and services are not available when needed due to hardware or software failures, cyber-attacks, or other disruptions. This includes the risk of data loss or corruption, which can lead to financial losses, reputational damage, or even regulatory non-compliance. |
|
2. **Security Risk**: The risk of unauthorized access, use, disclosure, disruption, modification, or destruction of IT systems and data. This includes the risk of data breaches, cyber-attacks, and insider threats that can compromise sensitive information and disrupt business operations. |
|
3. **Confidentiality Risk**: The risk of unauthorized access or disclosure of sensitive information, including personal data, financial information, or intellectual property. This includes the risk of data breaches, data leaks, or unauthorized access to critical infrastructure. |
|
4. **Integrity Risk**: The risk of data being altered in an unauthorized or undetected manner, including changes to financial transactions or records. This includes the risk of data tampering, fraud, or manipulation that can lead to financial losses or reputational damage. |
|
5. **Compliance Risk**: The risk of non-adherence to regulatory requirements, industry standards, and organizational policies related to IT operations and security. This includes the risk of fines, penalties, or reputational damage due to non-compliance with data protection regulations, such as GDPR or PSD2. |
|
|
|
These risks are not mutually exclusive, and IT risks often overlap or intersect. As an IT supervisor, it's essential to consider these risks in our risk assessments and mitigation strategies to ensure the stability and security of the financial sector's IT infrastructure. |
|
``` |
|
|
|
## Limitations and Bias |
|
The model, while robust in cybersecurity contexts, may not generalize well to unrelated domains. Users should be cautious of biases inherent in the training data, which may manifest in model predictions.
|
|
|
|
|
## Citation |
|
If you use this model, please cite it as follows: |
|
|
|
```bibtex |
|
@misc{cyber-risk-llama-3-8b, |
|
author = {Vanessa Lopes}, |
|
title = {Cyber-risk-llama-3-8B Model}, |
|
year = {2024}, |
|
  publisher = {Hugging Face Hub}

} |
|
``` |