---
language:
- code
license: mit
task_categories:
- graph-ml
- text-classification
tags:
- code
- ast
- gnn
- graph-neural-network
- ruby
- complexity-prediction
- code-generation
- negative-results
size_categories:
- 10K<n<100K
---
# GNN Ruby Code Study

A systematic study of Graph Neural Network architectures for Ruby code complexity prediction and generation.
## Dataset
22,452 Ruby methods parsed into AST graphs with 74-dimensional node features.
| Split | Samples | File |
|---|---|---|
| Train | 19,084 | dataset/train.jsonl |
| Validation | 3,368 | dataset/val.jsonl |
Each JSONL record contains:
- `repo_name`: Source repository
- `file_path`: Original file path
- `raw_source`: Raw Ruby source code
- `complexity_score`: McCabe cyclomatic complexity
- `ast_json`: Full AST as nested JSON (node types + literal values)
- `id`: Unique identifier
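Records can be read line by line with standard JSON tooling. A minimal sketch, using illustrative values rather than actual dataset contents:

```python
import json

# One JSONL line shaped like a dataset record (values are illustrative,
# not taken from the actual dataset).
line = json.dumps({
    "repo_name": "example/repo",
    "file_path": "lib/widget.rb",
    "raw_source": "def area(w, h)\n  w * h\nend",
    "complexity_score": 1,
    "ast_json": {"type": "def", "children": []},
    "id": "sample-0001",
})

record = json.loads(line)
print(record["complexity_score"])   # McCabe cyclomatic complexity
print(record["ast_json"]["type"])   # root AST node type
```

In practice you would iterate over `dataset/train.jsonl`, calling `json.loads` on each line.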
### Node Features (74D)
- One-hot encoding of 73 AST node types (def, send, args, lvar, str, ...) + 1 unknown
- Types cover Ruby AST nodes; literal values (identifiers, strings, numbers) map to unknown
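A minimal sketch of this encoding, using a small illustrative subset of the type vocabulary in place of the full 73 types (the real vocabulary is defined by the dataset's feature pipeline):

```python
# Illustrative subset of the 73 AST node types; the full list lives in
# the dataset's feature pipeline.
NODE_TYPES = ["def", "send", "args", "lvar", "str"]
UNKNOWN = len(NODE_TYPES)          # final slot: unknown / literal values
FEATURE_DIM = len(NODE_TYPES) + 1  # 74 in the real dataset (73 types + 1)

def encode_node(node_type):
    """One-hot encode an AST node type; literals map to the unknown slot."""
    vec = [0.0] * FEATURE_DIM
    idx = NODE_TYPES.index(node_type) if node_type in NODE_TYPES else UNKNOWN
    vec[idx] = 1.0
    return vec

print(encode_node("send"))    # 1.0 in slot 1
print(encode_node("my_var"))  # literal identifier: 1.0 in the unknown slot
```

Because every literal collapses into the single unknown slot, the encoder cannot distinguish one identifier or string from another, which is the literal value bottleneck discussed under Key Findings.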
## Key Findings
- A 5-layer GraphSAGE achieves MAE 4.018 (R² = 0.709) for complexity prediction, 16% better than the 3-layer baseline (9.9σ significance)
- GNN autoencoders produce 0% valid Ruby across all 15+ tested configurations
- The literal value bottleneck: Teacher-forced GIN achieves 81% node type accuracy and 99.5% type diversity, but 0% syntax validity because 47% of AST elements are literals with no learnable representation
- Chain decoders collapse: 93% of predictions default to UNKNOWN without structural supervision
- Total cost: ~$4.32 across 51 GPU experiments on Vast.ai RTX 4090 + local RTX 2070 SUPER
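The σ figure above reflects a comparison of replicate runs. A minimal sketch of how such a z-score can be computed from two sets of replicate MAEs; the numbers here are placeholders, not the study's results (the real replicates are in `results/autonomous_research.json`):

```python
from statistics import mean, stdev
from math import sqrt

# Placeholder replicate MAEs, for illustration only.
baseline_3layer = [4.80, 4.75, 4.82, 4.78, 4.79]
sage_5layer     = [4.01, 4.03, 4.02, 4.00, 4.04]

def z_score(a, b):
    """Difference of means in units of the pooled standard error."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

print(f"{z_score(baseline_3layer, sage_5layer):.1f} sigma")
```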
## Repository Structure

```
├── paper.md                       # Full research paper
├── dataset/
│   ├── train.jsonl                # 19,084 Ruby methods (37 MB)
│   └── val.jsonl                  # 3,368 Ruby methods (6.5 MB)
├── models/
│   ├── encoder_sage_5layer.pt     # Pre-trained SAGE encoder
│   └── decoders/                  # Trained decoder checkpoints
│       ├── tf-gin-256-deep.pt     # Best: teacher-forced GIN, 5 layers
│       ├── tf-gin-{128,256,512}.pt  # Dimension ablation
│       └── chain-gin-256.pt       # Control (no structural supervision)
├── results/
│   ├── fleet_experiments.json     # All Vast.ai experiment metrics
│   ├── autonomous_research.json   # 18 baseline variance replicates
│   └── gin_deep_dive/             # Local deep-dive analysis
│       ├── summary.json           # Ablation summary table
│       └── *_results.json         # Per-config detailed results
├── experiments/                   # Ratiocinator fleet YAML specs
├── specs/                         # Ratiocinator research YAML specs
├── src/                           # Model source code
│   ├── models.py                  # GNN architectures
│   ├── data_processing.py         # AST→graph pipeline
│   ├── loss.py                    # Loss functions
│   ├── train.py                   # Complexity prediction trainer
│   └── train_autoencoder.py       # Autoencoder trainer
└── scripts/                       # Runner and evaluation scripts
```
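`src/data_processing.py` implements the AST→graph pipeline. A minimal sketch of the core flattening step, assuming a simplified `{type, children}` node schema for illustration (the real records store the full parsed Ruby AST, including literal values):

```python
def ast_to_graph(ast):
    """Flatten nested {type, children} AST JSON into a node list + edge list.

    The {type, children} schema is an assumption for illustration; the
    dataset's ast_json field carries the full parsed Ruby AST.
    """
    node_types, edges = [], []

    def visit(node, parent=None):
        idx = len(node_types)
        node_types.append(node.get("type", "unknown"))
        if parent is not None:
            edges.append((parent, idx))  # parent-to-child AST edge
        for child in node.get("children", []):
            if isinstance(child, dict):
                visit(child, idx)

    visit(ast)
    return node_types, edges

ast = {"type": "def", "children": [
    {"type": "args", "children": []},
    {"type": "send", "children": [{"type": "lvar", "children": []}]},
]}
types, edges = ast_to_graph(ast)
print(types)  # ['def', 'args', 'send', 'lvar']
print(edges)  # [(0, 1), (0, 2), (2, 3)]
```

The node list would then be one-hot encoded into the 74-dimensional features and the edge list turned into the graph connectivity tensor consumed by the GNN.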
## Reproducing Results

```bash
# Clone the experiment branch
git clone -b experiment/ratiocinator-gnn-study https://github.com/timlawrenz/jubilant-palm-tree
cd jubilant-palm-tree

# Install dependencies
python -m venv .venv && source .venv/bin/activate
pip install torch torchvision torch_geometric

# Train complexity prediction (Track 1)
python train.py --conv_type SAGE --num_layers 5 --epochs 50

# Train autoencoder with teacher-forced GIN decoder (Track 4)
python train_autoencoder.py --decoder_conv_type GIN --decoder_edge_mode teacher_forced --epochs 30

# Run the full deep-dive ablation
python scripts/gin_deep_dive.py
```
## Source Code

- Model code: jubilant-palm-tree (branch: `experiment/ratiocinator-gnn-study`)
- Orchestrator: ratiocinator
## Citation

If you use this dataset or findings, please cite:

```bibtex
@misc{lawrenz2025gnnruby,
  title={Graph Neural Networks for Ruby Code Complexity Prediction and Generation: A Systematic Architecture Study},
  author={Tim Lawrenz},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/timlawrenz/gnn-ruby-code-study}}
}
```