---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
- reinforcement-learning
language:
- en
- code
tags:
- synthetic
- coding-agent
- mcts
- reasoning-traces
- process-reward-model
- rlhf
- dpo
- agentic-ai
- tool-use
- code-generation
- llm-training
- ucb
- reward-modeling
pretty_name: Coding Agent MCTS Reasoning Trace Pack
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: coding_intel_sample.parquet
---

# Coding Agent MCTS Reasoning Trace Pack (Sample)
A synthetic Monte Carlo Tree Search (MCTS) reasoning-trace dataset for autonomous coding agents. Each row is a complete reasoning lifecycle (initial context analysis → draft exploration → test feedback → prune-or-anchor → final outcome), labeled with a reasoning phenotype (TEST_DRIVEN, HACKER, DEEP_THINK, SECURITY_FIRST, REFACTOR_HEAVY) and carrying UCB scores at non-terminal steps and explicit rewards at terminal actions.
Built by SolsticeAI as a free sample of a larger commercial pack. 100% synthetic. No real code, no proprietary repos — task titles and descriptions are generic archetypes drawn from canonical library patterns.
## What is included

| File | Rows | Format | Purpose |
|---|---|---|---|
| `coding_intel_sample.parquet` | 10,000 | Parquet | Columnar, typed; best for analytics and RL training |
| `coding_intel_sample.jsonl` | 10,000 | JSON Lines | Streaming / LLM-training friendly |
- Source pack: 2.5M-trace corpus
- This sample: 10,000 reasoning traces, stratified at 2,000 per reasoning phenotype
- Reasoning phenotypes (5): TEST_DRIVEN, HACKER, DEEP_THINK, SECURITY_FIRST, REFACTOR_HEAVY
- Task types (3): bugfix, feature, refactor (~3,300 each)
- Languages (4): python, rust, go, typescript (2,500 each)
- Production impact tiers: LOW, MEDIUM, HIGH, CRITICAL (~2,500 each)
## Record structure

Each record is one reasoning lifecycle with 7 top-level fields:

| Field | Type | Contents |
|---|---|---|
| `schema_version` | string | Pack schema version (`1.0.0-coding-intel-sample`) |
| `event` | struct | `task_id`, `task_type`, `language`, `title`, `description` |
| `risk_context` | struct | `test_coverage_baseline`, `cyclomatic_complexity`, `production_impact` |
| `agent_reasoning` | list | Ordered reasoning steps: `action` (`analyze_context`, `write_draft`, `run_tests`, `lethe_prune`, `prometheus_anchor`), `depth`, `ucb_score` (null at root / terminal), `reward` (populated on terminal actions only), `thought` (natural-language rationale) |
| `correlated_telemetry` | struct | `linter_warnings_initial`, `linter_warnings_final`, `test_runtime_ms`, `ci_status` |
| `execution_summary` | struct | `files_changed`, `lines_added`, `lines_removed`, `time_to_resolution_sec` |
| `genetic_optimizer_feedback` | struct | `final_reward`, `lethe_prunes_triggered`, `nodes_expanded`, `phenotype_used` |

See `SCHEMA.md` for the full nested field breakdown.
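To make the nested shape concrete, here is a hypothetical record following the field table above. All values (task title, rewards, telemetry numbers) are invented for illustration; only the field names and nesting come from the schema.

```python
# Hypothetical record; values are invented, structure follows the schema table.
record = {
    "schema_version": "1.0.0-coding-intel-sample",
    "event": {
        "task_id": "T-0001",
        "task_type": "bugfix",
        "language": "python",
        "title": "Fix off-by-one in pagination helper",  # generic archetype title
        "description": "Generic archetype task description.",
    },
    "risk_context": {
        "test_coverage_baseline": 0.72,
        "cyclomatic_complexity": 11,
        "production_impact": "MEDIUM",
    },
    "agent_reasoning": [
        # ucb_score is null at the root and at terminal actions;
        # reward is populated on terminal actions only.
        {"action": "analyze_context", "depth": 0, "ucb_score": None, "reward": None,
         "thought": "Read the failing test and isolate the boundary condition."},
        {"action": "write_draft", "depth": 1, "ucb_score": 1.41, "reward": None,
         "thought": "Draft a fix adjusting the loop bound."},
        {"action": "prometheus_anchor", "depth": 2, "ucb_score": None, "reward": 0.93,
         "thought": "Tests pass; anchor this branch."},
    ],
    "correlated_telemetry": {"linter_warnings_initial": 3, "linter_warnings_final": 0,
                             "test_runtime_ms": 412, "ci_status": "SUCCESS"},
    "execution_summary": {"files_changed": 1, "lines_added": 6, "lines_removed": 2,
                          "time_to_resolution_sec": 840},
    "genetic_optimizer_feedback": {"final_reward": 0.93, "lethe_prunes_triggered": 0,
                                   "nodes_expanded": 3, "phenotype_used": "TEST_DRIVEN"},
}

# The terminal step's reward matches the optimizer's final_reward.
terminal = record["agent_reasoning"][-1]
assert terminal["reward"] == record["genetic_optimizer_feedback"]["final_reward"]
```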
## Why this dataset is useful

Most public coding datasets (HumanEval, SWE-bench, MBPP) only give you the final answer and the task description. They don't capture the reasoning tree the agent walked through — the wrong paths, the prunes, the anchor points. This pack is shaped around what modern agent-training pipelines actually need:

- **Explicit exploration vs. exploitation.** Traces include both successful and pruned branches: `lethe_prune` events with negative reward, `prometheus_anchor` events with positive reward. Roughly 30% of traces carry a failed exploration branch before reaching the golden timeline.
- **Reward signals embedded at every step.** UCB scores at each non-terminal step and explicit rewards at terminal actions are directly usable for RL, DPO, and process-reward-model training.
- **Phenotype labels on every trace.** Train a `SECURITY_FIRST` coder specifically; run phenotype-transfer studies; build strategy-aware evaluation harnesses.
- **Correlated telemetry.** Linter-warning deltas, test runtime, and CI status are correlated with the reasoning outcome, grounding each trace in observable signals.
- **Compact.** The Parquet file is ~340 KB and the JSONL ~12.5 MB; you can pull either into a notebook in seconds and iterate.
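For readers benchmarking against the pack's `ucb_score` values: the pack does not document its exact exploration constant, but the standard UCB1 selection rule is a reasonable reference point. A minimal sketch, assuming the conventional formula with exploration constant `c = √2`:

```python
import math

def ucb1(total_reward: float, visits: int, parent_visits: int,
         c: float = math.sqrt(2)) -> float:
    """Standard UCB1 score: average reward plus an exploration bonus.

    Unvisited nodes score +inf, matching the "Infinity" convention
    this pack uses at root nodes in the JSONL serialization.
    """
    if visits == 0:
        return math.inf
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

# A visited node with decent average reward vs. an unvisited sibling:
print(ucb1(3.0, 5, 20))  # finite score
print(ucb1(0.0, 0, 20))  # inf, so the unvisited sibling is explored first
```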
## Typical use cases
- MCTS-based coding agent architecture training
- Process reward model (PRM) training
- Reasoning-chain evaluation benchmarks
- Agent self-improvement via trace replay
- Strategy-conditional code-generation research
- Curriculum learning with task-difficulty ladders
- LLM fine-tuning on structured reasoning narratives
- Benchmarking UCB-based exploration policies
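As one illustration of the DPO use case: preference pairs can be mined by pairing traces of the same task archetype and ranking them by `final_reward`. The pairing policy below (same `task_type`, minimum reward margin) is an illustrative choice, not something the pack prescribes; field names follow the record schema.

```python
from itertools import combinations

def make_dpo_pairs(traces, min_margin=0.2):
    """Pair same-task-type traces into (chosen, rejected) tuples by final reward.

    `traces` is a list of records shaped like this pack's rows. The margin
    threshold and same-task constraint are illustrative assumptions.
    """
    pairs = []
    for a, b in combinations(traces, 2):
        if a["event"]["task_type"] != b["event"]["task_type"]:
            continue  # only compare traces solving the same archetype
        ra = a["genetic_optimizer_feedback"]["final_reward"]
        rb = b["genetic_optimizer_feedback"]["final_reward"]
        if abs(ra - rb) < min_margin:
            continue  # skip near-ties: weak preference signal
        chosen, rejected = (a, b) if ra > rb else (b, a)
        pairs.append((chosen, rejected))
    return pairs
```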
## Quick start

```python
import pandas as pd
import pyarrow.parquet as pq

df = pq.read_table("coding_intel_sample.parquet").to_pandas()

# Phenotype distribution (stratified, balanced)
print(df["genetic_optimizer_feedback"].apply(lambda g: g["phenotype_used"]).value_counts())

# Average final reward by phenotype
df["pheno"] = df["genetic_optimizer_feedback"].apply(lambda g: g["phenotype_used"])
df["reward"] = df["genetic_optimizer_feedback"].apply(lambda g: g["final_reward"])
print(df.groupby("pheno")["reward"].mean().round(2))

# Prune rate by task type
df["task"] = df["event"].apply(lambda e: e["task_type"])
df["prunes"] = df["genetic_optimizer_feedback"].apply(lambda g: g["lethe_prunes_triggered"])
print(df.groupby("task")["prunes"].mean().round(2))

# Pull one full reasoning chain
row = df.iloc[0]
for step in row["agent_reasoning"]:
    print(f"  d={step['depth']:<2} {step['action']:<20} ucb={step['ucb_score']} reward={step['reward']}: {step['thought']}")
```
Streaming form:

```python
import json

with open("coding_intel_sample.jsonl") as f:
    for line in f:
        trace = json.loads(line)
        # one MCTS reasoning trace per line
```
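For example, to pull out the pruned exploration branches while streaming, you can filter for `lethe_prune` steps carrying a negative reward. The helper below is a sketch that takes any iterable of JSONL rows, so it works equally on a file handle or a list of lines:

```python
import json

def pruned_branches(lines):
    """Yield (task_id, step) for every lethe_prune step with a negative reward.

    `lines` is any iterable of JSONL rows shaped like this pack's traces.
    """
    for line in lines:
        trace = json.loads(line)
        for step in trace["agent_reasoning"]:
            # reward is null on non-terminal steps; treat null as 0 here
            if step["action"] == "lethe_prune" and (step["reward"] or 0) < 0:
                yield trace["event"]["task_id"], step

# Usage against the sample file:
# with open("coding_intel_sample.jsonl") as f:
#     for task_id, step in pruned_branches(f):
#         print(task_id, step["thought"])
```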
## Notes and limitations

- Reasoning traces use canned action templates rather than live-executed code. This pack is designed for agent-architecture training, not end-to-end SWE-bench-style evaluation.
- `ci_status` is `SUCCESS` for every row in this sample. The production pack includes `FAILURE` / `FLAKY` / `TIMEOUT` variants; this free sample is restricted to golden-timeline anchored traces to keep a clean reward surface.
- UCB scores at root nodes use positive infinity (serialized as `"Infinity"` in JSONL), following the standard MCTS convention.
- Phenotype distribution is uniform; production licensing supports custom phenotype mixes.
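One practical note on the `Infinity` serialization: the bare `Infinity` token is not valid per strict JSON (RFC 8259), so strict parsers in other languages may reject those rows. Python's standard-library `json` module accepts it by default and decodes it to `float('inf')`:

```python
import json
import math

# Python's json module parses the non-standard Infinity token out of the box.
row = '{"action": "analyze_context", "ucb_score": Infinity, "reward": null}'
step = json.loads(row)
assert math.isinf(step["ucb_score"])
assert step["reward"] is None
```

If you consume the JSONL outside Python, check whether your parser supports this token or preprocess the rows first.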
## Responsible use
This dataset is intended for agent-training, process-reward-model, and MCTS research. It contains synthesized reasoning narratives and action templates — it does not contain real code, real commit history, or proprietary repository content. Models trained on this data will learn reasoning structure and phenotype-conditional behavior; downstream code-generation quality still depends on training with real-code supervision from appropriately licensed corpora.
## License
Released under CC BY 4.0. Use freely for research, agent prototyping, education, and commercial development with attribution.
## Get the full pack
This Hugging Face repo is a 10K-trace sample. The production pack scales to 2.5M+ traces with wider CI-outcome distribution (FAILURE / FLAKY / TIMEOUT), additional languages (C++, Java, Kotlin, Swift, C#), AST-diff variants, tool-call graph traces, multi-turn user-interaction sequences, custom phenotype mixes, and buyer-specific variants.
Self-serve (Stripe checkout):
- Sample Scale tier — $5,000 — ~25K records, one subject, 72-hour delivery.
Full pack + enterprise scope:
- www.solsticestudio.ai/datasets — per-SKU pricing across Starter / Professional / Enterprise tiers, plus commercial licensing, custom generation, and buyer-specific variants.
Procurement catalog:
- SolsticeAI Data Storefront — available via Datarade / Monda.
## Citation

```bibtex
@dataset{solstice_coding_intel_pack_2026,
  title     = {Coding Agent MCTS Reasoning Trace Pack (Sample)},
  author    = {SolsticeAI},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/solsticestudioai/coding-intel-pack}
}
```