---
license: apache-2.0
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: source
      dtype: string
    - name: file_name
      dtype: string
    - name: cwe
      sequence: string
  splits:
    - name: train
      num_bytes: 1015823
      num_examples: 113
  download_size: 405079
  dataset_size: 1015823
---

# New Version of Static Analysis Eval (Aug 20, 2024)

We have created a new version of the benchmark with instances that are harder than those in the previous one. There has been a lot of progress in models over the last year, and as a result the previous version of the benchmark was saturated. The methodology is the same, and we have also released the dataset generation script, which scans the top 100 Python projects to generate the instances; you can see it here. The same eval script works as before. You no longer need to log in to Semgrep, as we only use their OSS rules for this version of the benchmark.
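The released generation script is the authoritative reference; purely as an illustration of the approach described above (scan candidate files with Semgrep rules and keep those with exactly one finding), a minimal sketch might look like the following. The ruleset choice, JSON field access, and output format here are assumptions, not the actual script.

```python
import json
import subprocess
from pathlib import Path

def semgrep_findings(path: Path) -> list[dict]:
    """Run Semgrep on a single file and return its findings."""
    # Hypothetical invocation; the released generation script may differ.
    proc = subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--json", str(path)],
        capture_output=True, text=True, check=False,
    )
    return json.loads(proc.stdout).get("results", [])

def collect_instances(repo_dir: str) -> list[dict]:
    """Keep only files with exactly one finding, matching the benchmark format."""
    instances = []
    for file in Path(repo_dir).rglob("*.py"):
        findings = semgrep_findings(file)
        if len(findings) == 1:
            # Field layout mirrors the dataset features (source, file_name, cwe);
            # the CWE extraction path is an assumption about Semgrep's JSON output.
            cwe = findings[0].get("extra", {}).get("metadata", {}).get("cwe", [])
            instances.append({
                "source": file.read_text(errors="ignore"),
                "file_name": file.name,
                "cwe": cwe if isinstance(cwe, list) else [cwe],
            })
    return instances
```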

The highest score a model can get on this benchmark is 100%; you can see the oracle run logs here.

## New Evaluation

| Model | Score | Logs |
| --- | --- | --- |
| gpt-4o-mini | 52.21 | link |
| gpt-4o-mini + 3-shot prompt | 53.10 | link |
| gpt-4o-mini + rag (embedding & reranking) | 58.41 | link |
| gpt-4o-mini + fine-tuned with synth-vuln-fixes | 53.98 | link |

| Model | Score | Logs |
| --- | --- | --- |
| gpt-4o | 53.10 | link |
| gpt-4o + 3-shot prompt | 53.98 | link |
| gpt-4o + rag (embedding & reranking) | 56.64 | link |
| gpt-4o + fine-tuned with synth-vuln-fixes | 61.06 | link |
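The 3-shot and rag rows presumably map to the script's `--n_shot` and `--use_similarity` options (see the usage output further below). As a rough, hypothetical sketch of how such a prompt could be assembled, not the script's actual implementation, retrieved vulnerable/fixed example pairs are prepended to the target file:

```python
# Hypothetical sketch of few-shot prompt assembly; the eval script's real
# prompting and retrieval logic may differ.
def build_prompt(vulnerable_code: str, examples: list[dict], n_shot: int = 3) -> str:
    parts = [
        "Fix the security vulnerability in the following Python file. "
        "Return the complete fixed file."
    ]
    # `examples` is assumed to hold dicts with "vulnerable" and "fixed" keys,
    # e.g. pairs drawn from a fixes dataset such as synth-vuln-fixes.
    for ex in examples[:n_shot]:
        parts.append(f"### Vulnerable:\n{ex['vulnerable']}\n\n### Fixed:\n{ex['fixed']}")
    parts.append(f"### Vulnerable:\n{vulnerable_code}\n\n### Fixed:")
    return "\n\n".join(parts)
```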

## Mixture of Agents (MOA)

We also benchmarked gpt-4o with Patched MOA. This demonstrates that an inference optimization technique like MOA can improve performance without fine-tuning; a rough sketch of the general idea follows the table below.

| Model | Score | Logs |
| --- | --- | --- |
| gpt-4o-moa + 3-shot prompt | 60.18 | link |
| gpt-4o-moa + rag (embedding & reranking) | 61.06 | link |
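Patched MOA has its own implementation; the snippet below is only a generic mixture-of-agents sketch, assuming the official `openai` Python client: several candidate fixes are sampled, then a final aggregator call merges them into one answer.

```python
from openai import OpenAI  # assumes the official openai>=1.0 client

client = OpenAI()

def moa_fix(prompt: str, n_candidates: int = 3, model: str = "gpt-4o") -> str:
    """Generic mixture-of-agents sketch, not Patched MOA's exact pipeline:
    sample several candidate fixes, then aggregate them with a final call."""
    candidates = [
        client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        ).choices[0].message.content
        for _ in range(n_candidates)
    ]
    aggregate_prompt = (
        "You are given several candidate fixes for the same vulnerable file. "
        "Combine their strengths into a single, correct fixed file.\n\n"
        + "\n\n---\n\n".join(candidates)
    )
    final = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": aggregate_prompt}],
        temperature=0,
    )
    return final.choices[0].message.content
```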

# Static Analysis Eval Benchmark

A dataset of 76 Python programs taken from real open source Python projects (top 100 on GitHub), where each program is a file with exactly one vulnerability as detected by a particular static analyzer (Semgrep).
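Each row carries the fields declared in the metadata above: `source` (the file contents), `file_name`, and `cwe`. A minimal way to inspect the data with the `datasets` library is shown below; the dataset ID is a placeholder for this repository's Hub ID.

```python
from datasets import load_dataset

# Replace the placeholder with this repository's dataset ID on the Hugging Face Hub.
ds = load_dataset("<org>/static_analysis_eval", split="train")

print(ds)  # features: source, file_name, cwe
example = ds[0]
print(example["file_name"], example["cwe"])
print(example["source"][:200])  # first 200 characters of the vulnerable file
```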

You can run the `_script_for_eval.py` script to check the results.

```
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python _script_for_eval.py
```

For all supported options, run with `--help`:

```
usage: _script_for_eval.py [-h] [--model MODEL] [--cache] [--n_shot N_SHOT] [--use_similarity] [--oracle]

Run Static Analysis Evaluation

options:
  -h, --help        show this help message and exit
  --model MODEL     OpenAI model to use
  --cache           Enable caching of results
  --n_shot N_SHOT   Number of examples to use for few-shot learning
  --use_similarity  Use similarity for fetching dataset examples
  --oracle          Run in oracle mode (assume all vulnerabilities are fixed)
```
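For example, a run combining the documented options (caching enabled, 3 few-shot examples selected by similarity) could be launched like this; the model name is just an example:

```
python _script_for_eval.py --model gpt-4o-mini --cache --n_shot 3 --use_similarity
```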

We need to use the logged-in version of Semgrep to get access to more rules for vulnerability detection, so make sure you log in before running the eval script.

```
% semgrep login
API token already exists in /Users/user/.semgrep/settings.yml. To login with a different token logout use `semgrep logout`
```

After the run, the script will also create a log file that captures the stats for the run and the files that were fixed. You can see an example here. Because recent versions of Semgrep no longer detect a few of the samples in the dataset as vulnerable, the maximum score possible on the benchmark is 77.63%. You can see the oracle run log here.
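Assuming the score is simply the fraction of the 76 files that count as fixed, the 77.63% ceiling corresponds to 59 scorable files:

```python
# 59 of the 76 files remain scorable under recent Semgrep versions
# (a figure inferred from the stated ceiling): 59 / 76 ≈ 77.63%.
print(round(100 * 59 / 76, 2))  # 77.63
```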

## Evaluation

We did some detailed evaluations recently (19/08/2024):

| Model | Score | Logs |
| --- | --- | --- |
| gpt-4o-mini | 67.11 | link |
| gpt-4o-mini + 3-shot prompt | 71.05 | link |
| gpt-4o-mini + rag (embedding & reranking) | 72.37 | link |
| gpt-4o-mini + fine-tuned with synth-vuln-fixes | 77.63 | link |

| Model | Score | Logs |
| --- | --- | --- |
| gpt-4o | 68.42 | link |
| gpt-4o + 3-shot prompt | 77.63 | link |
| gpt-4o + rag (embedding & reranking) | 77.63 | link |
| gpt-4o + fine-tuned with synth-vuln-fixes | 77.63 | link |

## Leaderboard

The top models on the leaderboard are all fine-tuned using the same dataset that we released, called synth-vuln-fixes. You can read about our experience with fine-tuning them on our blog. You can also explore the leaderboard with this interactive visualization.

[Visualization of the leaderboard]

| Model | StaticAnalysisEval (%) | Time (mins) | Price (USD) |
| --- | --- | --- | --- |
| gpt-4o-mini-fine-tuned | 77.63 | 21:0 | 0.21 |
| gemini-1.5-flash-fine-tuned | 73.68 | 18:0 | |
| Llama-3.1-8B-Instruct-fine-tuned | 69.74 | 23:0 | |
| gpt-4o | 69.74 | 24:0 | 0.12 |
| gpt-4o-mini | 68.42 | 20:0 | 0.07 |
| gemini-1.5-flash-latest | 68.42 | 18:2 | 0.07 |
| Llama-3.1-405B-Instruct | 65.78 | 40:12 | |
| Llama-3-70B-instruct | 65.78 | 35:2 | |
| Llama-3-8B-instruct | 65.78 | 31.34 | |
| gemini-1.5-pro-latest | 64.47 | 34:40 | |
| gpt-4-1106-preview | 64.47 | 27:56 | 3.04 |
| gpt-4 | 63.16 | 26:31 | 6.84 |
| claude-3-5-sonnet-20240620 | 59.21 | 23:59 | 0.70 |
| moa-gpt-3.5-turbo-0125 | 53.95 | 49:26 | |
| gpt-4-0125-preview | 53.94 | 34:40 | |
| patched-coder-7b | 51.31 | 45.20 | |
| patched-coder-34b | 46.05 | 33:58 | 0.87 |
| patched-mix-4x7b | 46.05 | 60:00+ | 0.80 |
| Mistral-Large | 40.80 | 60:00+ | |
| Gemini-pro | 39.47 | 16:09 | 0.23 |
| Mistral-Medium | 39.47 | 60:00+ | 0.80 |
| Mixtral-Small | 30.26 | 30:09 | |
| gpt-3.5-turbo-0125 | 28.95 | 21:50 | |
| claude-3-opus-20240229 | 25.00 | 60:00+ | |
| Llama-3-8B-instruct.Q4_K_M | 21.05 | 60:00+ | |
| Gemma-7b-it | 19.73 | 36:40 | |
| gpt-3.5-turbo-1106 | 17.11 | 13:00 | 0.23 |
| Codellama-70b-Instruct | 10.53 | 30.32 | |
| CodeLlama-34b-Instruct | 7.89 | 23:16 | |

The price is calculated by assuming 1000 input and 1000 output tokens per call, as all examples in the dataset are < 512 tokens (OpenAI cl100k_base tokenizer).
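As a concrete sketch of that estimate, with one call per example and illustrative per-1K-token rates (the rates below are assumptions, not the actual prices used for the table):

```python
def run_cost(n_examples: int, price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimated cost of a run, assuming 1000 input + 1000 output tokens per call
    and one call per example; rates are in USD per 1K tokens."""
    return n_examples * (price_in_per_1k + price_out_per_1k)

# Example: 76 calls at $0.03/1K input and $0.06/1K output (illustrative rates).
print(round(run_cost(76, 0.03, 0.06), 2))  # 6.84
```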

Some models timed out during the run or had intermittent API errors; in such cases we try each example 3 times, which is why some runs are reported to take longer than 1 hr (60:00+ mins).

If you want to add your model to the leaderboard, you can send in a PR to this repo with the log file from the evaluation run.