---
license: cc-by-sa-4.0
tags:
  - competitive-programming
  - code-ranking
  - llm-benchmark
  - code-efficiency
  - aizu-online-judge
---

# AOJ-CodeRank-Benchmark: Hybrid Efficiency Ranking Benchmark Dataset

## 1. Overview

This dataset (AOJ-CodeRank-Benchmark) provides a high-quality, structured benchmark for evaluating the capability of Large Language Models (LLMs) on code efficiency ranking tasks.

The dataset is built entirely on code submission records from Aizu Online Judge (AOJ), strictly adhering to the principle of correctness first, efficiency second.

- **Problem Scope:** ALDS1 (Fundamental Algorithms), DSL/GRL/CGL (Advanced Data Structures/Graphs), and Volume 0000-3299 (Classic Contest Problems).
- **Core Feature:** Eliminates 0 ms submissions as well as low-quality and non-unique submissions, ensuring genuine execution-time differentiation across all data groups.

## 2. Data Structure

The dataset uses the JSON Lines (`.jsonl`) format. Each line represents a single **Task Group** object.

**Structure Preview (Candidates):**

| Field Name | Type | Description |
|---|---|---|
| `submission_id` | string | Unique submission ID. |
| `code_snippet` | string | The complete C++ source code. |
| `accuracy` | float | Accuracy score (0.0 to 1.0). |
| `time_ms` | integer | Actual execution time (in milliseconds). |
| `score_of_the_acc` | float | Normalized efficiency score (range -2.0 to 0.0). |
| `final_rank` | integer | Final competition rank (1, 2, 3, ...). |
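
For illustration, a hypothetical (abbreviated) task group could look like the record below. The top-level keys other than `candidates`, and all values shown, are assumptions for illustration only; consult `train.jsonl` for the authoritative layout.

```json
{
  "problem_id": "ALDS1_1_A",
  "candidates": [
    {
      "submission_id": "1234567",
      "code_snippet": "#include <iostream>\n...",
      "accuracy": 1.0,
      "time_ms": 20,
      "score_of_the_acc": -0.35,
      "final_rank": 1
    },
    {
      "submission_id": "1234568",
      "code_snippet": "#include <cstdio>\n...",
      "accuracy": 1.0,
      "time_ms": 80,
      "score_of_the_acc": -1.40,
      "final_rank": 2
    }
  ]
}
```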

## 3. Ground Truth (GT) Scoring and Ranking Logic 🏆

The LLM's objective is to predict `final_rank`. This ranking is derived from a unique two-tiered system:

### Phase I: Efficiency Score (`score_of_the_acc`)

This score is a purely performance-based metric: the negated sum of the normalized time and memory costs within the task group.

$$
\text{Score} = -(\text{Norm\_Time} + \text{Norm\_Memory})
$$

(Note: Score is between -2.0 and 0.0. A score closer to 0.0 is better.)
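
A minimal sketch of how such a score could be reproduced, assuming min-max normalization of time and memory within each task group; the exact normalization and the memory field (here called `memory_kb`) are not specified on this card and are assumptions:

```python
def efficiency_scores(times_ms, memory_kb):
    """Sketch of the Phase I score: -(Norm_Time + Norm_Memory), in [-2.0, 0.0].

    Assumes min-max normalization within a single task group; the exact
    normalization used to build the dataset is not documented here.
    """
    def min_max(values):
        lo, hi = min(values), max(values)
        if hi == lo:                      # all candidates tie on this cost
            return [0.0 for _ in values]
        return [(v - lo) / (hi - lo) for v in values]

    norm_time = min_max(times_ms)
    norm_mem = min_max(memory_kb)
    # Closer to 0.0 is better: the cheapest submission gets the highest score.
    return [-(t + m) for t, m in zip(norm_time, norm_mem)]
```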

### Phase II: Final Ranking (`final_rank`) Mechanism

The final rank is determined by a lexicographical sort (Standard Competition Ranking) using the following priority:

1. **Primary Sort Key (Accuracy):** `accuracy`, descending.
2. **Secondary Sort Key (Efficiency):** `score_of_the_acc`, descending.

**Tie-Breaking:** Submissions with identical accuracy and efficiency score receive the same rank (the 1-2-2-4 rule).
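
A minimal sketch of this two-key sort with standard competition ranking, assuming each candidate is a dict carrying the fields listed in Section 2:

```python
def assign_final_ranks(candidates):
    """Rank by accuracy (desc), then score_of_the_acc (desc); ties share a rank."""
    order = sorted(candidates, key=lambda c: (-c["accuracy"], -c["score_of_the_acc"]))
    ranks = {}
    prev_key, prev_rank = None, 0
    for position, cand in enumerate(order, start=1):
        key = (cand["accuracy"], cand["score_of_the_acc"])
        # Identical (accuracy, score) pairs share the earlier rank (1-2-2-4 rule).
        rank = prev_rank if key == prev_key else position
        ranks[cand["submission_id"]] = rank
        prev_key, prev_rank = key, rank
    return ranks
```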


## 4. Usage Example

```python
from datasets import load_dataset

# Load the dataset and access the candidates list
dataset = load_dataset("Slime/AOJ-CodeRank-Benchmark", data_files="train.jsonl", split="train")

# The LLM sorting algorithm receives task['candidates'] for ranking
for task in dataset:
    candidates = task['candidates']
    # The algorithm generates a predicted rank for each candidate;
    # evaluation compares the predicted ranks against each candidate's 'final_rank'.
```
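
This card does not prescribe an official metric; one possible sketch is pairwise agreement (a Kendall-tau-style statistic) between the predicted ranks and `final_rank`, where `predicted` and `gold` are equal-length lists of ranks for one task's candidates:

```python
from itertools import combinations

def pairwise_agreement(predicted, gold):
    """Fraction of candidate pairs ordered consistently by both rankings.

    Pairs tied in the ground truth are skipped; returns 1.0 for perfect agreement.
    """
    agree, total = 0, 0
    for i, j in combinations(range(len(gold)), 2):
        if gold[i] == gold[j]:
            continue                      # ignore ties in the ground truth
        total += 1
        same_order = (predicted[i] - predicted[j]) * (gold[i] - gold[j]) > 0
        agree += int(same_order)
    return agree / total if total else 1.0
```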

## 5. Acknowledgments

Original submission records and problem context are sourced from Aizu Online Judge (AOJ).