---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: CVE-LLM-Dataset
data_source: Custom data collected from the CVE database
data_formats: JSONL
---
# CVE-llm_dataset
This dataset is intended for fine-tuning an LLM to handle strictly CVE-focused inputs and outputs.

## Data extraction:
For the data extraction, I first downloaded the CVE database from the NVD lists and then loaded the records using `cve_dataset_2.py` and `cve_dataset.py`. The two scripts produce different datasets: one for Llama and the other for OpenAI GPT.

The CVE JSON files are organized in this directory layout:
```
cves:
|
├─1999
|   ├─0xxx
|   |   ├─CVE-1999-0001.json
|   |   ├─....
|   |   └─CVE-1999-0999.json
|   └─1xxx
|       ├─CVE-1999-1000.json
|       ├─....
|       └─CVE-1999-1598.json
└─2023

``` 
The scripts traverse these folders, extract the data from each file, and arrange it into a usable format for the fine-tuning process.
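
The repository's scripts are not reproduced in this card, but a minimal sketch of the traversal and conversion step might look like this. It assumes the current CVE JSON 5.x record schema (`containers.cna`, `cveMetadata`) and a hypothetical output file name; older records may need different field names:
```
import json
from pathlib import Path

def build_records(cve_root: Path):
    """Walk the cves/<year>/<range>/ tree and yield one training record per CVE file."""
    for cve_file in sorted(cve_root.glob("*/*/CVE-*.json")):
        data = json.loads(cve_file.read_text(encoding="utf-8"))
        cve_id = cve_file.stem  # e.g. "CVE-1999-0001"
        cna = data.get("containers", {}).get("cna", {})
        # First English description, if any.
        description = next(
            (d["value"] for d in cna.get("descriptions", []) if d.get("lang", "").startswith("en")),
            "",
        )
        products = ", ".join(p.get("product", "n/a") for p in cna.get("affected", [])) or "n/a"
        output = (
            f"{description}\n"
            f"Affected Products: {products}\n"
            f"References: {cna.get('references', [])}\n"
            f"CVE State: {data.get('cveMetadata', {}).get('state', '')}"
        )
        yield {
            "instruction": f"Explain {cve_id}",
            "input": f"Explain the vulnerability: {cve_id}",
            "output": output,
        }

# Write one JSON record per line (JSONL); the file name is illustrative.
with open("cve_llama2.jsonl", "w", encoding="utf-8") as out:
    for record in build_records(Path("cves")):
        out.write(json.dumps(record) + "\n")
```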

## Llama 2 model dataset:
The Llama 2 fine-tuning dataset follows this format:
```
    {
        "instruction": "Explain CVE-1999-0001",
        "input": "Explain the vulnerability: CVE-1999-0001",
        "output": "ip_input.c in BSD-derived TCP/IP implementations allows remote attackers to cause a denial of service (crash or hang) via crafted packets.\nAffected Products: n/a\nReferences: [{'tags': ['x_refsource_CONFIRM'], 'url': 'http://www.openbsd.org/errata23.html#tcpfix'}, {'name': '5707', 'tags': ['vdb-entry', 'x_refsource_OSVDB'], 'url': 'http://www.osvdb.org/5707'}]\nCVE State: PUBLISHED"
    }
```
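
This card only shows the Llama variant. For the OpenAI GPT dataset, OpenAI's chat fine-tuning format expects a `messages` array per record, so an equivalent entry would plausibly look like the following (the system prompt is illustrative, not taken from the scripts, and the record is pretty-printed here; the actual JSONL file would keep each record on one line):
```
{
    "messages": [
        {"role": "system", "content": "You are a CVE analysis assistant."},
        {"role": "user", "content": "Explain the vulnerability: CVE-1999-0001"},
        {"role": "assistant", "content": "ip_input.c in BSD-derived TCP/IP implementations allows remote attackers to cause a denial of service (crash or hang) via crafted packets. ..."}
    ]
}
```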
In the Llama 2 record, the `instruction` field tells the model what to do with the data provided. For example, we can instruct the model to take in user input, analyze it, and return an answer based on what the user asks. This is also where we can give the model a `role` or a `persona`.

The `input` field carries the user's query, the main piece of data that the model must process in order to produce an output.

The `output` field is the reference answer: it defines the format and content we want the model to generate when answering the question.
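
Once generated, the JSONL file can be loaded with the Hugging Face `datasets` library for fine-tuning. A short usage sketch (the file name assumes the extraction sketch above):
```
from datasets import load_dataset

# Treat each JSONL line as one training example.
dataset = load_dataset("json", data_files="cve_llama2.jsonl", split="train")
print(dataset[0]["instruction"])  # e.g. "Explain CVE-1999-0001"
```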