fix: correct CHROMATIC_TARGETS — canonical source + updated validation (92.3%)
README.md CHANGED

@@ -38,15 +38,20 @@ The CDM is a companion to the base Refractor ONNX model (a multimodal fusion net
 
 ```
 Index  Color   CHROMATIC_TARGETS (temporal / spatial / ontological)
-0      Red     Past
-1      Orange
-2      Yellow
-3      Green
-4      Blue
-5      Indigo
-6      Violet
+0      Red     Past / Thing / Known
+1      Orange  Past / Thing / Imagined
+2      Yellow  Future / Place / Imagined
+3      Green   Future / Place / Forgotten
+4      Blue    Present / Person / Forgotten
+5      Indigo  Uniform / Uniform / Known+Forgotten [0.1, 0.4, 0.4]
+6      Violet  Present / Person / Known
 7      White   Uniform across all axes
-8      Black
+8      Black   Uniform across all axes
+
+Targets are derived at runtime from `app/structures/concepts/chromatic_targets.py`,
+which reads directly from the canonical `the_rainbow_table_colors` Pydantic model.
+Previous versions had hand-rolled copies that diverged for 7 of 9 colours; this was
+corrected in April 2026 (fix-chromatic-targets-canonical-source).
 ```
 
 ## Validation Results
@@ -103,7 +108,7 @@ python training/validate_mix_scoring.py
 ## Limitations
 
 - CLAP embeddings have a maximum internal window of ~10s; chunked scoring is essential for full-length tracks
-- Green
+- Green classification is the weakest at 75% — two songs are near the Yellow/Violet boundary
 - Training data is drawn from a single artist's catalog — generalization to other music is untested
 - The concept embedding path requires a DeBERTa-v3-base inference pass (~600 MB model)
 
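The diff's note about deriving targets from one canonical table, rather than hand-rolled copies that can diverge, can be sketched roughly as below. All names here (`RainbowColor`, `axis_target`, `chromatic_target`) are illustrative, not the repo's actual API; a stdlib dataclass stands in for the Pydantic model, and Indigo's mixed `[0.1, 0.4, 0.4]` target is out of scope for the sketch.

```python
from dataclasses import dataclass

# Concrete values per axis; "Uniform" is handled as even mass over the axis.
TEMPORAL = ["Past", "Present", "Future"]
SPATIAL = ["Thing", "Person", "Place"]
ONTOLOGICAL = ["Known", "Imagined", "Forgotten"]


@dataclass
class RainbowColor:
    """Stand-in for one entry of the canonical colour table."""
    name: str
    temporal: str
    spatial: str
    ontological: str


def axis_target(value: str, axis: list[str]) -> list[float]:
    """One-hot over the axis values; 'Uniform' spreads mass evenly."""
    if value == "Uniform":
        return [1.0 / len(axis)] * len(axis)
    return [1.0 if v == value else 0.0 for v in axis]


def chromatic_target(color: RainbowColor) -> dict[str, list[float]]:
    """Derive the per-axis target distributions from the canonical entry."""
    return {
        "temporal": axis_target(color.temporal, TEMPORAL),
        "spatial": axis_target(color.spatial, SPATIAL),
        "ontological": axis_target(color.ontological, ONTOLOGICAL),
    }


red = RainbowColor("Red", "Past", "Thing", "Known")
print(chromatic_target(red)["temporal"])  # [1.0, 0.0, 0.0]
```

Because every target is computed from the single table at runtime, a change to one colour entry cannot leave a stale copy behind, which is the failure mode the commit fixes.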
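The first limitation (CLAP's ~10s internal window) implies the chunk-and-pool pattern sketched below: embed fixed-length windows, then average and renormalise. This is a generic sketch, not the repo's implementation; `embed_clap` is a placeholder for the real CLAP audio encoder, and the sample rate and pooling choice are assumptions.

```python
import numpy as np

SR = 48_000    # assumed sample rate
WINDOW_S = 10  # CLAP's effective window, per the limitation above


def embed_clap(chunk: np.ndarray) -> np.ndarray:
    """Placeholder: a real implementation runs the CLAP audio encoder."""
    rng = np.random.default_rng(len(chunk))  # deterministic stand-in output
    return rng.standard_normal(512)


def chunked_embedding(audio: np.ndarray) -> np.ndarray:
    """Embed a full-length track as the mean of ~10s chunk embeddings."""
    win = SR * WINDOW_S
    chunks = [audio[i:i + win] for i in range(0, len(audio), win)]
    # Drop trailing fragments shorter than 1 s; zero-pad the rest to a full window.
    chunks = [np.pad(c, (0, win - len(c))) for c in chunks if len(c) >= SR]
    embs = np.stack([embed_clap(c) for c in chunks])
    mean = embs.mean(axis=0)
    return mean / np.linalg.norm(mean)  # unit-normalise for cosine scoring


track = np.zeros(SR * 35, dtype=np.float32)  # a 35 s test signal -> 4 chunks
emb = chunked_embedding(track)
print(emb.shape)  # (512,)
```

Mean pooling is one choice among several; max pooling or attention-weighted pooling over chunks are common alternatives when a track's character varies over time.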