# W3SA - Solana Codebase Benchmark

## Overview
This repository contains the code for the W3SA Benchmark for Solana. This benchmark targets Rust-based smart contracts deployed on the Solana blockchain. To capture a broad spectrum of vulnerabilities, the benchmark leverages two sources of bug data. Primarily, we incorporate audit-based findings from established security reports, which provide a reliable baseline of known vulnerabilities. Additionally, in select projects, we manually injected vulnerabilities to simulate edge-case scenarios. This dual approach not only tests the detection system against realistic, real-world issues but also challenges it with subtle, less obvious vulnerabilities that may only emerge in complex or atypical conditions.
## Repo Structure

The repository contains two folders: a benchmark folder and a src folder. The benchmark folder holds all the projects used for evaluation, along with their audit findings in ground_truth; the src folder contains the scripts used to generate these outputs, allowing for reproducibility and further analysis.
```
├── README.md
├── benchmark
│   ├── config/
│   ├── ground_truth/
│   └── repositories/
└── src
    ├── dataset_transformation.py
    ├── eval.py
    ├── experiments.py
    ├── models.py
    ├── prompts.py
    ├── metrics.py
    └── radar
        ├── radar_eval.py
        └── radar_metrics.py
```
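The on-disk schema of the ground_truth findings is not documented above. As a rough sketch only, assuming one JSON file per project containing a list of findings with a severity label (both the layout and the field names here are hypothetical), the per-project severity counts in the statistics table below could be recomputed like this:

```python
import json
from collections import Counter
from pathlib import Path

def severity_counts(ground_truth_dir: str) -> dict[str, Counter]:
    """Count findings per severity for each project (hypothetical schema)."""
    counts: dict[str, Counter] = {}
    for path in Path(ground_truth_dir).glob("*.json"):
        findings = json.loads(path.read_text())
        # Assumed field name: each finding carries a "severity" label.
        counts[path.stem] = Counter(f["severity"] for f in findings)
    return counts

if __name__ == "__main__":
    for project, severities in severity_counts("benchmark/ground_truth").items():
        print(project, dict(severities))
```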
## Project Statistics

Project details with the total number of vulnerabilities at each severity level:
| Project | # Scanned Files | # Audit Bugs | # Injected Bugs | Critical | High | Medium | Low/Info |
|---|---|---|---|---|---|---|---|
| Invariant Protocol | 51 | 16 | 6 | 3 | 2 | 6 | 11 |
| Ellipsis Labs | 25 | 3 | 0 | 0 | 1 | 0 | 2 |
| Synthetify | 8 | 6 | 4 | 0 | 1 | 6 | 3 |
| Clone Protocol | 45 | 12 | 0 | 0 | 0 | 0 | 12 |
| Haven | 33 | 5 | 0 | 0 | 2 | 1 | 2 |
| Drift Protocol | 46 | 7 | 4 | 0 | 4 | 2 | 5 |
| Port Sundial | 25 | 8 | 5 | 1 | 5 | 3 | 5 |
## Detection Rate

| Project | Radar | o3-mini | o1 | o1-mini | GPT-4o | Claude-3.5 |
|---|---|---|---|---|---|---|
| Invariant Protocol | 0.09 | 0.19 | 0.19 | 0.09 | 0.17 | 0.17 |
| Ellipsis Labs | 0.0 | 0.0 | 0.34 | 0.0 | 0.3 | 0.33 |
| Synthetify | 0.1 | 0.25 | 0.19 | 0.29 | 0.3 | 0.3 |
| Clone Protocol | 0.09 | 0.34 | 0.09 | 0.19 | 0.1 | 0.33 |
| Haven | 0.0 | 0.0 | 0.0 | 0.0 | 0.23 | 0.2 |
| Drift Protocol | 0.09 | 0.28 | 0.19 | 0.27 | 0.11 | 0.44 |
| Port Sundial | 0.08 | 0.22 | 0.29 | 0.26 | 0.32 | 0.23 |
| Average | 0.064 | 0.182 | 0.184 | 0.157 | 0.213 | 0.285 |

The detection rate metrics across different models indicate that the ALMX-1.5 model outperforms the base AI models, demonstrating a 35% detection rate compared to 28.5% for Claude-3.5-Sonnet, 21.3% for GPT-4o, and 15.7% for o1-mini. These results highlight ALMX-1.5's superior ability to detect vulnerabilities, particularly within complex projects, and serve as a clear indicator of the performance gap between LLM models and traditional static analysis (Radar).
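For reference, the detection rate appears to be the fraction of ground-truth bugs (audit-based plus injected) that a model's report correctly identifies. Below is a minimal sketch of that computation, with matching simplified to an identifier intersection; the benchmark's actual pairing of free-form model findings to audit entries (by file, location, and root cause) is necessarily fuzzier:

```python
def detection_rate(ground_truth: set[str], reported: set[str]) -> float:
    """Fraction of ground-truth bugs covered by a model's report.

    Matching is simplified here to shared identifiers; this is an
    illustration of the metric, not the repository's eval code.
    """
    if not ground_truth:
        return 0.0
    return len(ground_truth & reported) / len(ground_truth)

# Toy example: 3 of 16 bugs correctly identified -> ~0.19
truth = {f"bug-{i}" for i in range(16)}
found = {"bug-0", "bug-1", "bug-2"}
print(round(detection_rate(truth, found), 2))  # 0.19
```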
## Set up

- Install the `uv` package manager if it is not yet available
- Run `uv sync`
## Run an experiment

- Set your `OPENAI_API_KEY` as an environment variable
- Launch your experiment by running: `uv run experiment.py --model o3-mini`
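To reproduce the full detection-rate table, one option is a small driver that sweeps the CLI above over several models. Only `o3-mini` is documented here, so the other `--model` values in this sketch are assumptions based on the table's column names:

```python
import subprocess

# Sweep the benchmark over several models via the CLI shown above.
# Only "o3-mini" appears in the README; the remaining flags are assumed
# to mirror the model names used in the detection-rate table.
MODELS = ["o3-mini", "o1", "o1-mini", "gpt-4o"]

for model in MODELS:
    subprocess.run(["uv", "run", "experiment.py", "--model", model], check=True)
```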
## Contact Us

For questions, suggestions, or to learn more about Almanax.ai, reach out to us at https://www.almanax.ai/contact