earthlyframes committed (verified)
Commit cc47de7 · Parent(s): f0a8cce

fix: correct CHROMATIC_TARGETS — canonical source + updated validation (92.3%)

Files changed (1): README.md (+14 -9)
@@ -38,15 +38,20 @@ The CDM is a companion to the base Refractor ONNX model (a multimodal fusion net
 
  ```
  Index Color CHROMATIC_TARGETS (temporal / spatial / ontological)
- 0 Red Past-heavy / Thing-heavy / Known-heavy
- 1 Orange Present-heavy / Thing-heavy / Known-heavy
- 2 Yellow Present-heavy / Place-heavy / Known-heavy
- 3 Green Present-heavy / Place-heavy / Known-heavy <- same targets as Yellow
- 4 Blue Future-heavy / Place-heavy / Forgotten-heavy
- 5 Indigo Future-heavy / Future-heavy / Forgotten-heavy
- 6 Violet Future-heavy / Future-heavy / Imagined-heavy
+ 0 Red Past / Thing / Known
+ 1 Orange Past / Thing / Imagined
+ 2 Yellow Future / Place / Imagined
+ 3 Green Future / Place / Forgotten
+ 4 Blue Present / Person / Forgotten
+ 5 Indigo Uniform / Uniform / Known+Forgotten [0.1, 0.4, 0.4]
+ 6 Violet Present / Person / Known
  7 White Uniform across all axes
- 8 Black Present-heavy / Thing-heavy / Imagined-heavy
+ 8 Black Uniform across all axes
+
+ Targets are derived at runtime from `app/structures/concepts/chromatic_targets.py`,
+ which reads directly from the canonical `the_rainbow_table_colors` Pydantic model.
+ Previous versions had hand-rolled copies that diverged for 7 of 9 colours; this was
+ corrected in April 2026 (fix-chromatic-targets-canonical-source).
  ```
 
  ## Validation Results

@@ -103,7 +108,7 @@ python training/validate_mix_scoring.py
  ## Limitations
 
  - CLAP embeddings have a maximum internal window of ~10s; chunked scoring is essential for full-length tracks
- - Green and White classification are unreliable (see validation results above)
+ - Green classification is the weakest at 75%; two songs are near the Yellow/Violet boundary
  - Training data is drawn from a single artist's catalog — generalization to other music is untested
  - The concept embedding path requires a DeBERTa-v3-base inference pass (~600 MB model)
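The "single canonical source" fix described in the diff follows a common pattern: derive the mapping from one authoritative model instead of hand-copying it. A minimal sketch of that pattern — only the names `chromatic_targets` and `the_rainbow_table_colors` come from the diff; the fields, values shown, and use of a dataclass in place of the actual Pydantic model are illustrative assumptions, not the repo's code:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Color:
    """Stand-in for one entry of the canonical colour model (hypothetical shape)."""
    index: int
    name: str
    targets: tuple  # (temporal, spatial, ontological)


# Hypothetical canonical table; in the repo this is the
# `the_rainbow_table_colors` Pydantic model.
the_rainbow_table_colors = [
    Color(0, "Red", ("Past", "Thing", "Known")),
    Color(7, "White", ("Uniform", "Uniform", "Uniform")),
]

# Derive the index -> targets mapping at runtime, so the values live in
# exactly one place and hand-rolled copies cannot silently diverge.
CHROMATIC_TARGETS = {c.index: c.targets for c in the_rainbow_table_colors}
```

The point of the change is that a divergence like the one fixed here (7 of 9 colours wrong) becomes impossible once the mapping is computed rather than transcribed.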
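The Limitations section notes that CLAP's ~10s internal window makes chunked scoring essential for full-length tracks. A minimal sketch of the windowing step that implies — the window/hop values and the function itself are assumptions for illustration, not the repo's scoring API:

```python
def chunk_spans(total_s: float, window_s: float = 10.0, hop_s: float = 10.0):
    """Yield (start, end) second-spans covering a track of length total_s.

    Each span is at most window_s long (CLAP's effective window); per-span
    scores would then be aggregated (e.g. averaged) for the full track.
    """
    start = 0.0
    while start < total_s:
        yield (start, min(start + window_s, total_s))
        start += hop_s


# A 25 s track yields three spans; the last one is a partial window.
spans = list(chunk_spans(25.0))
# spans == [(0.0, 10.0), (10.0, 20.0), (20.0, 25.0)]
```

Overlapping hops (hop_s < window_s) would trade compute for smoother coverage; the non-overlapping default above is just the simplest choice.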