---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: source
dtype: string
- name: file_name
dtype: string
- name: cwe
dtype: string
splits:
- name: train
num_bytes: 87854
num_examples: 76
download_size: 53832
dataset_size: 87854
---
# Static Analysis Eval Benchmark
A dataset of 76 Python programs taken from real open source Python projects (top 1,000 on GitHub),
where each program is a file containing exactly one vulnerability as detected by the Semgrep static analyzer.
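The dataset can be loaded directly with the `datasets` library. The snippet below is a minimal sketch that assumes the standard Hugging Face API and the `source`, `file_name`, and `cwe` columns declared in the dataset card above.
```
from datasets import load_dataset

# Load the single "train" split declared in the dataset card.
dataset = load_dataset("patched-codes/static-analysis-eval", split="train")

# Each example carries the vulnerable source code, its file name, and the
# CWE reported by Semgrep.
example = dataset[0]
print(example["file_name"], example["cwe"])
print(example["source"][:200])
```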
You can run `_script_for_eval.py` to check the results:
```
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python _script_for_eval.py
```
For all supported options, run with `--help`:
```
usage: _script_for_eval.py [-h] [--model MODEL] [--cache] [--n_shot N_SHOT] [--use_similarity]
Run Static Analysis Evaluation
options:
-h, --help show this help message and exit
--model MODEL OpenAI model to use
--cache Enable caching of results
--n_shot N_SHOT Number of examples to use for few-shot learning
--use_similarity Use similarity for fetching dataset examples
```
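For example, a run with caching and few-shot examples could look like the following. The API key export is an assumption about how the script reads OpenAI credentials; check the script if it expects something different.
```
export OPENAI_API_KEY="sk-..."   # assumption: standard OpenAI environment variable
python _script_for_eval.py --model gpt-4o-mini --cache --n_shot 2 --use_similarity
```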
We need to use the logged-in version of Semgrep to get access to more rules for vulnerability detection, so make sure you log in before running the eval script.
```
% semgrep login
API token already exists in /Users/user/.semgrep/settings.yml. To login with a different token logout use `semgrep logout`
```
After the run, the script also creates a log file that captures the stats for the run and the files that were fixed.
You can see an example [here](https://huggingface.co/datasets/patched-codes/static-analysis-eval/blob/main/gpt-4o-mini_semgrep_1.85.0_20240818_215254.log).
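The log filename appears to encode the model, Semgrep version, and a run timestamp (inferred from the example above, not an official naming spec). A small sketch for pulling those fields back out:
```
from datetime import datetime

def parse_log_name(name: str) -> dict:
    # Assumed pattern: "<model>_semgrep_<version>_<YYYYMMDD>_<HHMMSS>.log"
    stem = name.removesuffix(".log")
    model, _, rest = stem.partition("_semgrep_")
    version, date_str, time_str = rest.rsplit("_", 2)
    return {
        "model": model,
        "semgrep_version": version,
        "run_started": datetime.strptime(date_str + time_str, "%Y%m%d%H%M%S"),
    }

print(parse_log_name("gpt-4o-mini_semgrep_1.85.0_20240818_215254.log"))
```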
# Leaderboard
The top models on the leaderboard are all fine-tuned on the same dataset that we released, called [synth vuln fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes).
You can read about our experience with fine-tuning them on our [blog](https://www.patched.codes/blog/a-comparative-study-of-fine-tuning-gpt-4o-mini-gemini-flash-1-5-and-llama-3-1-8b).
You can also explore the leaderboard with this [interactive visualization](https://claude.site/artifacts/5656c16d-9751-407c-9631-a3526c259354).
![Visualization of the leaderboard](./visualization.png)
| Model | StaticAnalysisEval (%) | Time (mm:ss) | Price (USD) |
|:-------------------------:|:----------------------:|:-------------:|:-----------:|
| gpt-4o-mini-fine-tuned | 77.63 | 21:0 | 0.21 |
| gemini-1.5-flash-fine-tuned | 73.68 | 18:0 | |
| Llama-3.1-8B-Instruct-fine-tuned | 69.74 | 23:0 | |
| gpt-4o | 69.74 | 24:0 | 0.12 |
| gpt-4o-mini | 68.42 | 20:0 | 0.07 |
| gemini-1.5-flash-latest | 68.42 | 18:2 | 0.07 |
| Llama-3.1-405B-Instruct | 65.78 | 40:12 | |
| Llama-3-70B-instruct | 65.78 | 35:2 | |
| Llama-3-8B-instruct | 65.78 | 31:34 | |
| gemini-1.5-pro-latest | 64.47 | 34:40 | |
| gpt-4-1106-preview | 64.47 | 27:56 | 3.04 |
| gpt-4 | 63.16 | 26:31 | 6.84 |
| claude-3-5-sonnet-20240620| 59.21 | 23:59 | 0.70 |
| moa-gpt-3.5-turbo-0125 | 53.95 | 49:26 | |
| gpt-4-0125-preview | 53.94 | 34:40 | |
| patched-coder-7b | 51.31 | 45:20 | |
| patched-coder-34b | 46.05 | 33:58 | 0.87 |
| patched-mix-4x7b | 46.05 | 60:00+ | 0.80 |
| Mistral-Large | 40.80 | 60:00+ | |
| Gemini-pro | 39.47 | 16:09 | 0.23 |
| Mistral-Medium | 39.47 | 60:00+ | 0.80 |
| Mixtral-Small | 30.26 | 30:09 | |
| gpt-3.5-turbo-0125 | 28.95 | 21:50 | |
| claude-3-opus-20240229 | 25.00 | 60:00+ | |
| Llama-3-8B-instruct.Q4_K_M| 21.05 | 60:00+ | |
| Gemma-7b-it | 19.73 | 36:40 | |
| gpt-3.5-turbo-1106 | 17.11 | 13:00 | 0.23 |
| Codellama-70b-Instruct | 10.53 | 30:32 | |
| CodeLlama-34b-Instruct | 7.89 | 23:16 | |
The price is calculated by assuming 1,000 input and 1,000 output tokens per call, as all examples in the dataset are under 512 tokens (OpenAI cl100k_base tokenizer).
Some models timed out during the run or had intermittent API errors; we retry each example up to 3 times in such cases, which is why some runs are reported as longer than 1 hour (60:00+ mins).
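As a rough sketch of that arithmetic (the per-token rates below are illustrative placeholders, not the rates behind the table):
```
# Cost estimate per run under the assumption above:
# 1,000 input + 1,000 output tokens per call, one call per example.
NUM_EXAMPLES = 76
INPUT_TOKENS_PER_CALL = 1_000
OUTPUT_TOKENS_PER_CALL = 1_000
INPUT_RATE_PER_MTOK = 0.15    # USD per 1M input tokens (placeholder)
OUTPUT_RATE_PER_MTOK = 0.60   # USD per 1M output tokens (placeholder)

cost = NUM_EXAMPLES * (
    INPUT_TOKENS_PER_CALL * INPUT_RATE_PER_MTOK
    + OUTPUT_TOKENS_PER_CALL * OUTPUT_RATE_PER_MTOK
) / 1_000_000
print(f"Estimated run cost: ${cost:.2f}")
```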
If you want to add your model to the leaderboard, you can send in a PR to this repo with the log file from the evaluation run. |