Vanessasml committed on
Commit
9418de2
1 Parent(s): 3e7e130

Added model card

Files changed (1)
  1. README.md +90 -0
README.md ADDED
@@ -0,0 +1,90 @@
---
datasets:
- Vanessasml/cyber-reports-news-analysis-llama2-3k
pipeline_tag: question-answering
tags:
- finance
- supervision
- cyber risk
- cybersecurity
- cyber threats
- SFT
- LoRA
- A100GPU
---
# Model Card for Llama-2-7B-SFT-LoRa-4bit-Float16

## Model Description
This model is a fine-tuned version of `NousResearch/Llama-2-7b-chat-hf`, trained on the `vanessasml/cyber-reports-news-analysis-llama2-3k` dataset.

It is specifically designed to enhance performance in generating and understanding cybersecurity text, identifying cyber threats, and classifying data under the NIST taxonomy and IT risks based on the EBA ICT guidelines.

## Intended Use
- **Intended users**: Data scientists and developers working on cybersecurity applications.
- **Out-of-scope use cases**: This model should not be used for medical advice, legal decisions, or any life-critical systems.

## Training Data
The model was fine-tuned on `vanessasml/cyber-reports-news-analysis-llama2-3k`, a dataset focused on cybersecurity news analysis.
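
For a quick look at the data, the dataset can be loaded directly from the Hugging Face Hub with the `datasets` library. This is a minimal sketch; it assumes the dataset exposes a default `train` split.

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub
# (assumes a default "train" split)
dataset = load_dataset("Vanessasml/cyber-reports-news-analysis-llama2-3k", split="train")

print(dataset)      # number of rows and column names
print(dataset[0])   # first training example
```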

## Training Procedure
- **Preprocessing**: Text data were tokenized with the tokenizer of the base model, `NousResearch/Llama-2-7b-chat-hf`.
- **Hardware**: Training was performed on GPUs with mixed precision (FP16/BF16) enabled.
- **Optimizer**: Paged AdamW with a cosine learning rate schedule.
- **Epochs**: The model was trained for 1 epoch.
- **Batch size**: 4 per device, with gradient accumulation where required. A sketch of the overall setup is shown below.
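
The full training script is not included in this card, so the snippet below is only a sketch of how a run like the one described above is commonly assembled with `trl`'s `SFTTrainer` (argument names vary between `trl` versions). The optimizer, scheduler, epoch count, and per-device batch size mirror the bullets above; the learning rate, accumulation steps, sequence length, dataset text column, and output directory are illustrative assumptions.

```python
# Sketch of the SFT setup described above -- not the original training script.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "NousResearch/Llama-2-7b-chat-hf"
dataset = load_dataset("Vanessasml/cyber-reports-news-analysis-llama2-3k", split="train")

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token   # Llama-2 has no pad token by default

model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

training_args = TrainingArguments(
    output_dir="./results",                # assumption: not specified in the card
    num_train_epochs=1,                    # 1 epoch, as stated above
    per_device_train_batch_size=4,         # 4 per device, as stated above
    gradient_accumulation_steps=1,         # assumption: exact value not given
    optim="paged_adamw_32bit",             # Paged AdamW
    lr_scheduler_type="cosine",            # cosine learning rate schedule
    learning_rate=2e-4,                    # assumption: common value for LoRA SFT
    fp16=True,                             # mixed precision
    gradient_checkpointing=True,           # see "Environmental Impact" below
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=training_args,
    dataset_text_field="text",             # assumption: name of the formatted text column
    max_seq_length=1024,                   # assumption
    # peft_config=lora_config,             # LoRA/4-bit settings: see the sketch in the next section
)
trainer.train()
```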

## Evaluation Results
Evaluation was qualitative, assessing the relevance and coherence of generated text in cybersecurity contexts.

## Quantization and Optimization
- **Quantization**: 4-bit precision with type `nf4`; nested quantization is disabled.
- **Compute dtype**: `float16` to ensure efficient computation.
- **LoRA Settings** (see the configuration sketch below):
  - LoRA attention dimension (rank): `64`
  - Alpha parameter for LoRA scaling: `16`
  - Dropout in LoRA layers: `0.1`
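
Expressed in code, these settings correspond roughly to the following `bitsandbytes` and `peft` configuration. The values mirror the bullets above; the target modules are not listed in the card, so the ones below are an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, nested (double) quantization disabled, float16 compute dtype
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA: attention dimension (rank) 64, scaling alpha 16, dropout 0.1
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

# Base model loaded in 4-bit with the LoRA adapter attached
model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(model, lora_config)
```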

## Environmental Impact
- **Compute Resources**: Training leveraged energy-efficient hardware and practices to minimize the carbon footprint.
- **Strategies**: Gradient checkpointing and group-wise data processing were used to optimize memory and power usage.

## How to Use
Here is how to load and use the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "llama-2-7b-sft-lora-4bit-float16"  # replace with the full Hub repo id of this model if loading from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example prompt using the Llama-2 chat instruction format
prompt = """Question: What are the cyber threats present in the article?
Article: More than one million Brits over the age of 45 have fallen victim to some form of email-related fraud, \
as the internet supersedes the telephone as the favored channel for scammers, according to Aviva. \
The insurer polled over 1000 adults over the age of 45 in the latest update to its long-running Real Retirement Report. \
Further, 6% said they had actually fallen victim to such an online attack, amounting to around 1.2 million adults. \
Some 22% more people it surveyed had been targeted by ...
"""

# Generate text with a text-generation pipeline
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=2000)
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
```

## Limitations and Bias
The model, while robust in cybersecurity contexts, may not generalize well to unrelated domains. Users should be cautious of biases inherent in the training data, which may manifest in model predictions.

## Citation
If you use this model, please cite it as follows:

```bibtex
@misc{llama-2-7b-sft-lora-4bit-float16,
  author       = {Vanessa Lopes},
  title        = {Llama-2-7B-SFT-LoRa-4bit-Float16 Model},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {Hugging Face Model Hub}
}
```