---
license: apache-2.0
task_categories:
  - question-answering
language:
  - en
tags:
  - LLM4code
  - code_reasoning
  - neurips25
size_categories:
  - 10K<n<100K
---

# CoRe: Benchmarking LLMs’ Code Reasoning Capabilities through Static Analysis Tasks

This repository hosts the CoRe benchmark, designed to evaluate the reasoning capabilities of large language models on program analysis tasks including data dependency, control dependency, and information flow. Each task instance is represented as a structured JSON object with detailed metadata for evaluation and reproduction.

The benchmark contains 25k data points (last updated: Sep. 24, 2025).

Each example is a JSON object with the following fields:

```json
{
  "label_file": "codenet_p00496_s700056700_main_12_40.yaml",
  "code_file": "codenet_p00496_s700056700_main_12_40.c",
  "pid": "p00496",
  "sid": "s700056700",
  "funname": "main",
  "start": 12,
  "end": 40,
  "dataset": "codenet",
  "language": "C",
  "src": 30,
  "dst": 33,
  "groundtruth": true,
  "task_id": "control_codenet_p00496_s700056700_main_12_40_k_33_1",
  "prompt": "...",
  "category": "trace"
}
```
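
For quick inspection, the records can be read with the Hugging Face `datasets` library. The repository ID and split name below are illustrative assumptions, not guaranteed by this card:

```python
# Minimal loading sketch. The repo ID "danningx/CoRe" and the "train" split
# are assumptions for illustration; adjust them to the actual dataset layout.
from datasets import load_dataset

ds = load_dataset("danningx/CoRe", split="train")
example = ds[0]
print(example["task_id"], example["language"], example["groundtruth"])
```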

## 🏷 Category Field

The `category` field specifies the type of prompt associated with each task instance:

- `trace`: The prompt asks the model to produce a dependency trace if the answer is yes (e.g., the control or data dependency exists).
- `all_source`: The prompt asks the model to enumerate all source elements involved in the dependency.
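
As a small illustration, the loaded instances can be partitioned by this field (reusing the assumed `ds` object from the loading sketch above):

```python
# Hypothetical convenience: split loaded records by prompt category.
# `ds` is assumed to hold records following the schema shown earlier.
trace_tasks = [ex for ex in ds if ex["category"] == "trace"]
all_source_tasks = [ex for ex in ds if ex["category"] == "all_source"]
```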

## 🧩 Field Descriptions

| Field | Description |
|---|---|
| `label_file` | Path to the YAML file containing ground-truth annotations for the task instance. |
| `code_file` | Path to the corresponding C/Java/Python source code file. |
| `pid` | Problem ID from the original source dataset (e.g., CodeNet or GCJ). |
| `sid` | Solution ID identifying the specific program implementation. |
| `funname` | Name of the target function in which the analysis is conducted. |
| `start`, `end` | Line numbers marking the start and end of the target function. |
| `dataset` | Original dataset source (`codenet` or `gcj`). |
| `language` | Programming language of the source file (C, Java, Python). |
| `src`, `dst` | The two program elements queried in this task. For control dependency they are line numbers; for data dependency and information flow they are `["varname", line_no]` pairs representing variable instances (see the sketch below the table). |
| `groundtruth` | Boolean indicating whether the specified dependency relationship holds (i.e., `true` if `src` has the given dependency on `dst`). |
| `task_id` | Unique ID for the task instance. The prefix (`control_`, `data_`, `infoflow_`) identifies the task type. |
| `prompt` | The prompt string used in the experiment for this task instance. It includes the instruction, examples, query, and code context provided to the LLM. Content-specific fields (e.g., source/target names, line numbers) are filled into a standardized prompt template. |
| `category` | Type of prompt associated with the instance (`trace` or `all_source`); see the Category Field section above. |
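
Because `src`/`dst` take different shapes across task types, a small helper can make records easier to print. The function below is a hypothetical convenience based on the field descriptions above, not part of the dataset or its tooling:

```python
# Illustrative only: src/dst are plain line numbers for control-dependency
# tasks and ["varname", line_no] pairs for data-dependency / information-flow
# tasks.
def describe_endpoint(endpoint):
    if isinstance(endpoint, int):
        return f"line {endpoint}"
    varname, line_no = endpoint
    return f"variable '{varname}' at line {line_no}"

# describe_endpoint(30)         -> "line 30"
# describe_endpoint(["x", 17])  -> "variable 'x' at line 17"
```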

## 📚 Task Types

The benchmark contains three types of program reasoning tasks:

- `control`: Control dependency between lines.
- `data`: Data dependency between variables.
- `infoflow`: Information flow (explicit or implicit) between variables.

Each instance is designed to assess whether an LLM can understand and reason over static semantics in real-world source code.
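
As noted in the `task_id` description, the task type can be recovered from the ID prefix; the sketch below simply counts instances per type (again reusing the assumed `ds` object from the loading sketch):

```python
# Counts instances per task type using the task_id prefix
# ("control", "data", or "infoflow"), as described above.
from collections import Counter

counts = Counter(ex["task_id"].split("_", 1)[0] for ex in ds)
print(counts)
```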

## 🛠 Scripts and Usage

For scripts, evaluation tools, and detailed instructions on running inference over CoRe, please check out our companion GitHub repository:

🔗 Website: https://corebench.github.io/

🔗 Source code: https://github.com/CoReBench/CoRe

🔗 Paper: https://arxiv.org/abs/2507.05269

The GitHub repository includes:

- Raw annotation data that can be used to generate various static analysis tasks
- Predefined prompts for each task and language
- Scripts for invoking models and parsing responses
- Evaluation scripts for dependency classification, trace generation, and dependency source enumeration (a minimal illustration of the classification metric follows)
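
The official evaluation scripts live in the GitHub repository. Purely as an illustration, dependency classification reduces to comparing a model's parsed yes/no answer against `groundtruth`, roughly as in the sketch below; the `predictions` mapping is hypothetical:

```python
# Illustrative sketch of dependency-classification accuracy only; this is
# not the official evaluation script. `predictions` maps task_id -> bool
# (the model's parsed yes/no answer).
def classification_accuracy(examples, predictions):
    correct = sum(
        1 for ex in examples
        if predictions.get(ex["task_id"]) == ex["groundtruth"]
    )
    return correct / len(examples)
```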

## 📄 License

Apache License 2.0