---
license: other
license_name: collaborative-intelligence-license
license_link: >-
  https://github.com/brotherclone/white/blob/main/COLLABORATIVE_INTELLIGENCE_LICENSE.md
language:
  - en
tags:
  - music
  - multimodal
  - audio
  - midi
  - chromatic-taxonomy
  - rebracketing
  - evolutionary-composition
size_categories:
  - 10K<n<100K
---
# White Training Data
Training data and models for the Rainbow Table chromatic fitness function — a multimodal ML model that scores how well audio, MIDI, and text align with a target chromatic mode (Black, Red, Orange, Yellow, Green, Blue, Indigo, Violet).
Part of The Earthly Frames project, a conscious collaboration between human creativity and AI.
## Purpose
These models are fitness functions for evolutionary music composition, not classifiers in isolation. The production pipeline works like this:
- A concept agent generates a textual concept
- A music production agent generates 50 chord progression variations
- The chromatic fitness model scores each for consistency with the target color
- Top candidates advance through the drums, bass, and melody stages
- Final candidates go to human evaluation
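The score-and-truncate selection at the heart of this loop can be sketched as follows. This is a minimal illustration: the `Candidate` type and `mock_fitness` stand-in are hypothetical, and the real pipeline scores candidates with the trained fusion model described below.

```python
from dataclasses import dataclass
import random

@dataclass
class Candidate:
    """Hypothetical stand-in for a chord progression produced by the music agent."""
    progression: list
    score: float = 0.0

def mock_fitness(candidate: Candidate, target_color: str) -> float:
    # Stand-in for the chromatic fitness model: a deterministic pseudo-score in [0, 1).
    return (hash((tuple(candidate.progression), target_color)) % 1000) / 1000

def select_top(candidates, target_color: str, keep: int = 10):
    """Score every candidate against the target color and keep the best `keep`."""
    for c in candidates:
        c.score = mock_fitness(c, target_color)
    return sorted(candidates, key=lambda c: c.score, reverse=True)[:keep]

# 50 chord-progression variations, as in the pipeline above
pool = [Candidate(progression=[random.randint(0, 11) for _ in range(4)]) for _ in range(50)]
survivors = select_top(pool, target_color="Violet", keep=10)
```

In the production loop the survivors would seed the next stage (drums, bass, melody) before human evaluation.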
## Version

Current: **v0.3.0** (2026-02-13)
## Dataset Structure

| Split | Rows | Description |
|---|---|---|
| `base_manifest` | 1,327 | Track-level metadata: song info, concepts, musical keys, chromatic labels, training targets |
| `training_segments` | 11,605 | Time-aligned segments with lyric text, structure sections, audio/MIDI coverage flags |
| `training_full` | 11,605 | Segments joined with manifest metadata — the primary training table |
### Playable Audio Preview

| Split | Rows | Description |
|---|---|---|
| `preview` | ~160 | Playable audio preview — 20 segments per chromatic color with inline audio playback |
**Try it:** load the `preview` config to hear what each chromatic color sounds like:

```python
from datasets import load_dataset

# Load the playable preview (single "train" split)
preview = load_dataset("earthlyframes/white-training-data", "preview", split="train")

# Listen to a GREEN segment
green_segment = preview.filter(lambda x: x["rainbow_color"] == "Green")[0]
print(green_segment["concept"])

# Audio plays inline in Jupyter/Colab, or access the array via green_segment["audio"]
```
## Coverage by Chromatic Color
| Color | Segments | Audio | MIDI | Text |
|---|---|---|---|---|
| Black | 1,748 | 83.0% | 58.5% | 100.0% |
| Red | 1,474 | 93.7% | 48.6% | 90.7% |
| Orange | 1,731 | 83.8% | 51.1% | 100.0% |
| Yellow | 656 | 88.0% | 52.9% | 52.6% |
| Green | 393 | 90.1% | 69.5% | 0.0% |
| Violet | 2,100 | 75.9% | 55.6% | 100.0% |
| Indigo | 1,406 | 77.2% | 34.1% | 100.0% |
| Blue | 2,097 | 96.0% | 12.1% | 100.0% |
**Note:** Audio waveforms and MIDI binaries are stored separately (not included in the metadata configs due to size). The `preview` config includes playable audio for exploration; the full media parquet (~15 GB) is used locally during training.
## Trained Models

| File | Size | Description |
|---|---|---|
| `data/models/fusion_model.pt` | ~16 MB | PyTorch checkpoint — MultimodalFusionModel (4.3M params) |
| `data/models/fusion_model.onnx` | ~16 MB | ONNX export for fast CPU inference |
The models are consumed via the `ChromaticScorer` class, which wraps encoding and inference:
```python
from chromatic_scorer import ChromaticScorer

scorer = ChromaticScorer("path/to/fusion_model.onnx")
result = scorer.score(midi_bytes=midi, audio_waveform=audio, concept_text="a haunted lullaby")
# result: {"temporal": 0.87, "spatial": 0.91, "ontological": 0.83, "confidence": 0.89}

# Batch scoring for evolutionary candidate selection
ranked = scorer.score_batch(candidates, target_color="Violet")
```
**Architecture:** PianoRollEncoder CNN (1.1M params, unfrozen) + fusion MLP (3.2M params) with 4 regression heads. Input: audio (512-dim CLAP) + MIDI (512-dim piano roll) + concept (768-dim DeBERTa) + lyric (768-dim DeBERTa) = 2560-dim fused representation. Trained with learned null embeddings and modality dropout (p=0.15) for robustness to missing modalities.
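The fusion stage described above can be sketched in PyTorch. This is a minimal, hypothetical reconstruction from the stated dimensions (512 + 512 + 768 + 768 = 2560), not the real checkpoint's architecture: layer widths and head shapes are illustrative assumptions, and the learned null embeddings stand in for missing modalities.

```python
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Minimal sketch of the fusion MLP: 2560-dim fused input, 4 regression heads.
    Layer sizes are illustrative, not those of the released model."""
    def __init__(self, hidden: int = 512):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(2560, hidden), nn.ReLU(), nn.Dropout(0.15))
        # One scalar head per scored dimension
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden, 1)
            for name in ("temporal", "spatial", "ontological", "confidence")
        })
        # Learned null embeddings substitute for missing audio/MIDI
        self.null_audio = nn.Parameter(torch.zeros(512))
        self.null_midi = nn.Parameter(torch.zeros(512))

    def forward(self, concept, lyric, audio=None, midi=None):
        b = concept.shape[0]
        audio = audio if audio is not None else self.null_audio.expand(b, -1)
        midi = midi if midi is not None else self.null_midi.expand(b, -1)
        fused = torch.cat([audio, midi, concept, lyric], dim=-1)  # (b, 2560)
        h = self.trunk(fused)
        return {name: head(h).squeeze(-1) for name, head in self.heads.items()}

model = FusionSketch()
# Score a batch of 2 segments with audio and MIDI missing (null embeddings kick in)
out = model(concept=torch.randn(2, 768), lyric=torch.randn(2, 768))
```

The key design point is that a missing modality is replaced by a *learned* vector rather than zeros, so the model can encode "absence" explicitly.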
## Key Features

### `training_full` (primary training table)

- `rainbow_color` — Target chromatic label (Black/Red/Orange/Yellow/Green/Blue/Indigo/Violet)
- `rainbow_color_temporal_mode` / `rainbow_color_ontological_mode` — Regression targets for mode dimensions
- `concept` — Textual concept describing the song's narrative
- `lyric_text` — Segment-level lyrics (when available)
- `bpm`, `key_signature_note`, `key_signature_mode` — Musical metadata
- `training_data` — Struct of computed features: rebracketing type/intensity, narrative complexity, boundary fluidity, etc.
- `has_audio` / `has_midi` — Modality availability flags
- `start_seconds` / `end_seconds` — Segment time boundaries
### `preview` (playable audio)

Same metadata fields as `training_full`, plus:

- `audio` — Audio feature with inline playback support (FLAC encoded, 44.1 kHz)
- `duration_seconds` — Segment duration
## Usage

```python
from datasets import load_dataset

# Load the primary training table (segments + manifest metadata)
training = load_dataset("earthlyframes/white-training-data", "training_full")

# Load the playable audio preview
preview = load_dataset("earthlyframes/white-training-data", "preview")

# Load just the base manifest (track-level)
manifest = load_dataset("earthlyframes/white-training-data", "base_manifest")

# Load raw segments (no manifest join)
segments = load_dataset("earthlyframes/white-training-data", "training_segments")

# Pin a specific version
training = load_dataset("earthlyframes/white-training-data", "training_full", revision="v0.3.0")
```
## Training Results

### Text-Only (Phases 1-4)
| Task | Metric | Result |
|---|---|---|
| Binary classification (has rebracketing) | Accuracy | 100% |
| Multi-class classification (rebracketing type) | Accuracy | 100% |
| Temporal mode regression | Mode accuracy | 94.9% |
| Ontological mode regression | Mode accuracy | 92.9% |
| Spatial mode regression | Mode accuracy | 61.6% |
### Multimodal Fusion (Phase 3)
| Dimension | Text-Only | Multimodal | Δ |
|---|---|---|---|
| Temporal | 94.9% | 90.0% | −4.9 pts |
| Ontological | 92.9% | 91.0% | −1.9 pts |
| Spatial | 61.6% | 93.0% | +31.4 pts |
Spatial mode was bottlenecked by instrumental albums (Yellow, Green) which lack text. The multimodal fusion model resolves this by incorporating CLAP audio embeddings and piano roll MIDI features, enabling accurate scoring even without lyrics. Temporal and ontological show slight regression in multi-task mode but remain strong; single-task variants can be used where maximum per-dimension accuracy is needed.
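The modality-dropout trick that makes this robustness possible can be sketched as a simple training-time augmentation. The `modality_dropout` helper below is hypothetical (the real training code may differ); it nulls out each modality independently with probability p, forcing the model to learn to score from whatever remains.

```python
import random

def modality_dropout(features: dict, p: float = 0.15, rng=random.random) -> dict:
    """Null out each modality independently with probability p during training,
    so the model must fall back on its learned null embeddings."""
    return {name: (None if rng() < p else vec) for name, vec in features.items()}

# With rng pinned above p, every modality survives (deterministic for illustration)
features = {"audio": [0.1] * 4, "midi": [0.2] * 4, "concept": [0.3] * 4, "lyric": [0.4] * 4}
kept = modality_dropout(features, rng=lambda: 1.0)
```

At inference time the same mechanism handles genuinely missing modalities, which is how the instrumental Yellow and Green albums can be scored without lyrics.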
## Source
83 songs across 8 chromatic albums. The 7 color albums (Black through Violet) are human-composed source material spanning 10+ years of original work — all audio, lyrics, and arrangements are the product of human creativity. The White album is being co-produced with AI using the evolutionary composition pipeline described above. No sampled or licensed material is used in any album.
## License
Collaborative Intelligence License v1.0 — This work represents conscious partnership between human creativity and AI. Both parties have agency; both must consent to sharing.
*Generated 2026-02-13 | [GitHub](https://github.com/brotherclone/white)*