# GNN Ruby Code Study
Systematic study of Graph Neural Network architectures for Ruby code complexity prediction and generation.
## Key Findings
1. **5-layer GraphSAGE** achieves MAE 4.018 (R² = 0.709) for complexity prediction, 16% better than the 3-layer baseline
2. **GNN autoencoders produce 0% valid Ruby** across all tested architectures and loss functions
3. **Teacher-forced GIN decoder** achieves 7% validity, the only non-zero result in 35 experiments
4. **Total cost: ~$4.20** on Vast.ai RTX 4090 instances
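The training code behind these findings lives in the linked `jubilant-palm-tree` repo. As a rough, self-contained sketch of what a 5-layer mean-aggregation GraphSAGE regressor computes (toy graph, random weights, and all shapes are invented here for illustration, not taken from the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def sage_layer(h, adj, w_self, w_neigh):
    """One GraphSAGE layer with mean aggregation:
    h'_v = ReLU(W_self · h_v + W_neigh · mean(h_u for u in N(v)))."""
    deg = adj.sum(axis=1, keepdims=True)
    neigh_mean = (adj @ h) / np.maximum(deg, 1.0)  # mean over neighbors
    return np.maximum(h @ w_self + neigh_mean @ w_neigh, 0.0)

# Toy AST-like graph: 4 nodes, undirected edges as a dense adjacency matrix.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
h = rng.normal(size=(4, 8))  # per-node features (e.g. AST node embeddings)

# Stack 5 SAGE layers, then mean-pool and regress to a scalar complexity score.
for _ in range(5):
    w_self = rng.normal(size=(8, 8)) * 0.1
    w_neigh = rng.normal(size=(8, 8)) * 0.1
    h = sage_layer(h, adj, w_self, w_neigh)

w_out = rng.normal(size=(8, 1)) * 0.1
complexity = float(h.mean(axis=0) @ w_out)  # graph-level prediction
```

The real model would be trained (e.g. with an MAE loss against measured complexity); this sketch only shows the forward pass and message-passing structure.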
## Contents
- `paper.md` - Full research paper
- `results/experiments.json` - All fleet experiment metrics
- `results/autonomous_research.json` - 18 baseline variance replicates
- `experiments/` - Fleet YAML specifications for all 3 tracks
- `specs/` - Ratiocinator ResearchSpec YAMLs
## Source Code
- Model code: [jubilant-palm-tree](https://github.com/timlawrenz/jubilant-palm-tree) (branch: `experiment/ratiocinator-gnn-study`)
- Orchestrator: [ratiocinator](https://github.com/timlawrenz/ratiocinator)