drscotthawley committed
Commit
78b535c
1 Parent(s): 37c6212

added chord stuff

Files changed (5)
  1. pom/all_chords.txt +529 -0
  2. pom/chord_names.txt +1 -0
  3. pom/chord_types.txt +0 -0
  4. pom/chords.py +438 -0
  5. pom/chords.txt +528 -0
pom/all_chords.txt ADDED
@@ -0,0 +1,529 @@
+ A:11
+ A:13
+ A:7
+ A:7/3
+ A:7/5
+ A:7(#9)
+ A:7/b7
+ A:9
+ A:aug
+ Ab:11
+ Ab:13
+ Ab:7
+ Ab:7/3
+ Ab:7/5
+ Ab:7(#9)
+ Ab:7/b7
+ Ab:9
+ Ab:aug
+ Ab:dim
+ Ab:dim7
+ Ab:hdim7
+ Ab:maj
+ Ab:maj(11)
+ Ab:maj13
+ Ab:maj/3
+ Ab:maj/5
+ Ab:maj6
+ Ab:maj6(9)
+ Ab:maj7
+ Ab:maj7/3
+ Ab:maj7/5
+ Ab:maj7/7
+ Ab:maj(9)
+ Ab:maj9
+ Ab:maj9(11)
+ Ab:min
+ Ab:min(11)
+ Ab:min11
+ Ab:min13
+ Ab:min/5
+ Ab:min6
+ Ab:min6(9)
+ Ab:min7
+ Ab:min7/5
+ Ab:min7/b7
+ Ab:min(9)
+ Ab:min9
+ Ab:min/b3
+ Ab:minmaj7
+ Ab:sus2
+ Ab:sus4
+ Ab:sus4(b7)
+ Ab:sus4(b7,9)
+ A:dim
+ A:dim7
+ A:hdim7
+ A:maj
+ A:maj(11)
+ A:maj13
+ A:maj/3
+ A:maj/5
+ A:maj6
+ A:maj6(9)
+ A:maj7
+ A:maj7/3
+ A:maj7/5
+ A:maj7/7
+ A:maj(9)
+ A:maj9
+ A:maj9(11)
+ A:min
+ A:min(11)
+ A:min11
+ A:min13
+ A:min/5
+ A:min6
+ A:min6(9)
+ A:min7
+ A:min7/5
+ A:min7/b7
+ A:min(9)
+ A:min9
+ A:min/b3
+ A:minmaj7
+ A:sus2
+ A:sus4
+ A:sus4(b7)
+ A:sus4(b7,9)
+ B:11
+ B:13
+ B:7
+ B:7/3
+ B:7/5
+ B:7(#9)
+ B:7/b7
+ B:9
+ B:aug
+ Bb:11
+ Bb:13
+ Bb:7
+ Bb:7/3
+ Bb:7/5
+ Bb:7(#9)
+ Bb:7/b7
+ Bb:9
+ Bb:aug
+ Bb:dim
+ Bb:dim7
+ Bb:hdim7
+ Bb:maj
+ Bb:maj(11)
+ Bb:maj13
+ Bb:maj/3
+ Bb:maj/5
+ Bb:maj6
+ Bb:maj6(9)
+ Bb:maj7
+ Bb:maj7/3
+ Bb:maj7/5
+ Bb:maj7/7
+ Bb:maj(9)
+ Bb:maj9
+ Bb:maj9(11)
+ Bb:min
+ Bb:min(11)
+ Bb:min11
+ Bb:min13
+ Bb:min/5
+ Bb:min6
+ Bb:min6(9)
+ Bb:min7
+ Bb:min7/5
+ Bb:min7/b7
+ Bb:min(9)
+ Bb:min9
+ Bb:min/b3
+ Bb:minmaj7
+ Bb:sus2
+ Bb:sus4
+ Bb:sus4(b7)
+ Bb:sus4(b7,9)
+ B:dim
+ B:dim7
+ B:hdim7
+ B:maj
+ B:maj(11)
+ B:maj13
+ B:maj/3
+ B:maj/5
+ B:maj6
+ B:maj6(9)
+ B:maj7
+ B:maj7/3
+ B:maj7/5
+ B:maj7/7
+ B:maj(9)
+ B:maj9
+ B:maj9(11)
+ B:min
+ B:min(11)
+ B:min11
+ B:min13
+ B:min/5
+ B:min6
+ B:min6(9)
+ B:min7
+ B:min7/5
+ B:min7/b7
+ B:min(9)
+ B:min9
+ B:min/b3
+ B:minmaj7
+ B:sus2
+ B:sus4
+ B:sus4(b7)
+ B:sus4(b7,9)
+ C#:11
+ C:11
+ C#:13
+ C:13
+ C#:7
+ C:7
+ C#:7/3
+ C:7/3
+ C#:7/5
+ C:7/5
+ C#:7(#9)
+ C:7(#9)
+ C#:7/b7
+ C:7/b7
+ C#:9
+ C:9
+ C#:aug
+ C:aug
+ C#:dim
+ C:dim
+ C#:dim7
+ C:dim7
+ C#:hdim7
+ C:hdim7
+ C#:maj
+ C:maj
+ C#:maj(11)
+ C:maj(11)
+ C#:maj13
+ C:maj13
+ C#:maj/3
+ C:maj/3
+ C#:maj/5
+ C:maj/5
+ C#:maj6
+ C:maj6
+ C#:maj6(9)
+ C:maj6(9)
+ C#:maj7
+ C:maj7
+ C#:maj7/3
+ C:maj7/3
+ C#:maj7/5
+ C:maj7/5
+ C#:maj7/7
+ C:maj7/7
+ C#:maj(9)
+ C#:maj9
+ C:maj(9)
+ C:maj9
+ C#:maj9(11)
+ C:maj9(11)
+ C#:min
+ C:min
+ C#:min(11)
+ C#:min11
+ C:min(11)
+ C:min11
+ C#:min13
+ C:min13
+ C#:min/5
+ C:min/5
+ C#:min6
+ C:min6
+ C#:min6(9)
+ C:min6(9)
+ C#:min7
+ C:min7
+ C#:min7/5
+ C:min7/5
+ C#:min7/b7
+ C:min7/b7
+ C#:min(9)
+ C#:min9
+ C:min(9)
+ C:min9
+ C#:min/b3
+ C:min/b3
+ C#:minmaj7
+ C:minmaj7
+ C#:sus2
+ C:sus2
+ C#:sus4
+ C:sus4
+ C#:sus4(b7)
+ C:sus4(b7)
+ C#:sus4(b7,9)
+ C:sus4(b7,9)
+ D:11
+ D:13
+ D:7
+ D:7/3
+ D:7/5
+ D:7(#9)
+ D:7/b7
+ D:9
+ D:aug
+ D:dim
+ D:dim7
+ D:hdim7
+ D:maj
+ D:maj(11)
+ D:maj13
+ D:maj/3
+ D:maj/5
+ D:maj6
+ D:maj6(9)
+ D:maj7
+ D:maj7/3
+ D:maj7/5
+ D:maj7/7
+ D:maj(9)
+ D:maj9
+ D:maj9(11)
+ D:min
+ D:min(11)
+ D:min11
+ D:min13
+ D:min/5
+ D:min6
+ D:min6(9)
+ D:min7
+ D:min7/5
+ D:min7/b7
+ D:min(9)
+ D:min9
+ D:min/b3
+ D:minmaj7
+ D:sus2
+ D:sus4
+ D:sus4(b7)
+ D:sus4(b7,9)
+ E:11
+ E:13
+ E:7
+ E:7/3
+ E:7/5
+ E:7(#9)
+ E:7/b7
+ E:9
+ E:aug
+ Eb:11
+ Eb:13
+ Eb:7
+ Eb:7/3
+ Eb:7/5
+ Eb:7(#9)
+ Eb:7/b7
+ Eb:9
+ Eb:aug
+ Eb:dim
+ Eb:dim7
+ Eb:hdim7
+ Eb:maj
+ Eb:maj(11)
+ Eb:maj13
+ Eb:maj/3
+ Eb:maj/5
+ Eb:maj6
+ Eb:maj6(9)
+ Eb:maj7
+ Eb:maj7/3
+ Eb:maj7/5
+ Eb:maj7/7
+ Eb:maj(9)
+ Eb:maj9
+ Eb:maj9(11)
+ Eb:min
+ Eb:min(11)
+ Eb:min11
+ Eb:min13
+ Eb:min/5
+ Eb:min6
+ Eb:min6(9)
+ Eb:min7
+ Eb:min7/5
+ Eb:min7/b7
+ Eb:min(9)
+ Eb:min9
+ Eb:min/b3
+ Eb:minmaj7
+ Eb:sus2
+ Eb:sus4
+ Eb:sus4(b7)
+ Eb:sus4(b7,9)
+ E:dim
+ E:dim7
+ E:hdim7
+ E:maj
+ E:maj(11)
+ E:maj13
+ E:maj/3
+ E:maj/5
+ E:maj6
+ E:maj6(9)
+ E:maj7
+ E:maj7/3
+ E:maj7/5
+ E:maj7/7
+ E:maj(9)
+ E:maj9
+ E:maj9(11)
+ E:min
+ E:min(11)
+ E:min11
+ E:min13
+ E:min/5
+ E:min6
+ E:min6(9)
+ E:min7
+ E:min7/5
+ E:min7/b7
+ E:min(9)
+ E:min9
+ E:min/b3
+ E:minmaj7
+ E:sus2
+ E:sus4
+ E:sus4(b7)
+ E:sus4(b7,9)
+ F#:11
+ F:11
+ F#:13
+ F:13
+ F#:7
+ F:7
+ F#:7/3
+ F:7/3
+ F#:7/5
+ F:7/5
+ F#:7(#9)
+ F:7(#9)
+ F#:7/b7
+ F:7/b7
+ F#:9
+ F:9
+ F#:aug
+ F:aug
+ F#:dim
+ F:dim
+ F#:dim7
+ F:dim7
+ F#:hdim7
+ F:hdim7
+ F#:maj
+ F:maj
+ F#:maj(11)
+ F:maj(11)
+ F#:maj13
+ F:maj13
+ F#:maj/3
+ F:maj/3
+ F#:maj/5
+ F:maj/5
+ F#:maj6
+ F:maj6
+ F#:maj6(9)
+ F:maj6(9)
+ F#:maj7
+ F:maj7
+ F#:maj7/3
+ F:maj7/3
+ F#:maj7/5
+ F:maj7/5
+ F#:maj7/7
+ F:maj7/7
+ F#:maj(9)
+ F#:maj9
+ F:maj(9)
+ F:maj9
+ F#:maj9(11)
+ F:maj9(11)
+ F#:min
+ F:min
+ F#:min(11)
+ F#:min11
+ F:min(11)
+ F:min11
+ F#:min13
+ F:min13
+ F#:min/5
+ F:min/5
+ F#:min6
+ F:min6
+ F#:min6(9)
+ F:min6(9)
+ F#:min7
+ F:min7
+ F#:min7/5
+ F:min7/5
+ F#:min7/b7
+ F:min7/b7
+ F#:min(9)
+ F#:min9
+ F:min(9)
+ F:min9
+ F#:min/b3
+ F:min/b3
+ F#:minmaj7
+ F:minmaj7
+ F#:sus2
+ F:sus2
+ F#:sus4
+ F:sus4
+ F#:sus4(b7)
+ F:sus4(b7)
+ F#:sus4(b7,9)
+ F:sus4(b7,9)
+ G:11
+ G:13
+ G:7
+ G:7/3
+ G:7/5
+ G:7(#9)
+ G:7/b7
+ G:9
+ G:aug
+ G:dim
+ G:dim7
+ G:hdim7
+ G:maj
+ G:maj(11)
+ G:maj13
+ G:maj/3
+ G:maj/5
+ G:maj6
+ G:maj6(9)
+ G:maj7
+ G:maj7/3
+ G:maj7/5
+ G:maj7/7
+ G:maj(9)
+ G:maj9
+ G:maj9(11)
+ G:min
+ G:min(11)
+ G:min11
+ G:min13
+ G:min/5
+ G:min6
+ G:min6(9)
+ G:min7
+ G:min7/5
+ G:min7/b7
+ G:min(9)
+ G:min9
+ G:min/b3
+ G:minmaj7
+ G:sus2
+ G:sus4
+ G:sus4(b7)
+ G:sus4(b7,9)
+ N
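
The file above enumerates the chord vocabulary: 12 roots x 44 chord types plus a trailing 'N' ("no chord"), 529 entries in all. A minimal loading sketch (the relative path pom/all_chords.txt is an assumption):

# illustrative loader; path is an assumption, not part of the commit
with open("pom/all_chords.txt") as f:
    vocab = [line.strip() for line in f if line.strip()]
assert len(vocab) == 12 * 44 + 1          # 528 root:type combinations plus 'N'
chord_to_index = {c: i for i, c in enumerate(vocab)}
print(chord_to_index["Ab:maj7"], vocab[-1])  # some chord's index, and the final 'N' entry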
pom/chord_names.txt ADDED
@@ -0,0 +1 @@
+ ['aug', 'dim', 'dim7', 'hdim7', 'maj', 'maj(11)', 'maj13', 'maj/3', 'maj/5', 'maj6', 'maj6(9)', 'maj7', 'maj7/3', 'maj7/5', 'maj7/7', 'maj(9)', 'maj9', 'maj9(11)', 'min', 'min(11)', 'min11', 'min13', 'min/5', 'min6', 'min6(9)', 'min7', 'min7/5', 'min7/b7', 'min(9)', 'min9', 'min/b3', 'minmaj7', 'sus2', 'sus4', 'sus4(b7)', 'sus4(b7,9)', '7', '7/3', '7/5', '7(#9)', '7/b7', '9', '11', '13']
pom/chord_types.txt ADDED
File without changes
pom/chords.py ADDED
@@ -0,0 +1,438 @@
+ #! /usr/bin/env python3
+ import re
+ import sys
+ import torch.nn as nn
+ import torch
+ from PIL import Image
+ import numpy as np
+ from control_toys.utils import rect_to_square, square_to_rect
+
+ CHORD_BORDER = 8 # chord border size in pixels
+
+ # my distillation of all output from polyffusion's chord finder for the POP909 dataset transposed +/- 12 semitones
+ NOTE_NAMES = ['C','C#','D','E','Eb','F','F#','G', 'Ab', 'A', 'Bb', 'B'] # these are from polyffusion's chord finder. yes, mixing # & b is weird
+ #NOTE_NAMES2 = ['A','Ab','B','Bb','C','C#','D','E','Eb','F','F#','G'] # how they are in all_chords.txt file
+
+ CHORD_TYPES = ['aug', 'dim', 'dim7', 'hdim7',
+     'maj', 'maj(11)', 'maj13', 'maj/3', 'maj/5', 'maj6', 'maj6(9)', 'maj7', 'maj7/3', 'maj7/5', 'maj7/7', 'maj(9)', 'maj9', 'maj9(11)',
+     'min', 'min(11)', 'min11', 'min13', 'min/5', 'min6', 'min6(9)', 'min7', 'min7/5', 'min7/b7', 'min(9)', 'min9', 'min/b3', 'minmaj7',
+     'sus2', 'sus4', 'sus4(b7)', 'sus4(b7,9)', '7', '7/3', '7/5', '7(#9)', '7/b7', '9', '11', '13'] # 44 chord types
+
+ CHORD_IND_PAIRS = [(note, chord) for note in NOTE_NAMES for chord in CHORD_TYPES]
+ POSSIBLE_CHORDS = [f"{note}:{chord}" for (note, chord) in CHORD_IND_PAIRS]
+ #POSSIBLE_CHORDS = [f"{note}:{chord}" for note in NOTE_NAMES for chord in CHORD_TYPES]
+ POSSIBLE_CHORDS += ['N'] # N for no chord
+ assert len(POSSIBLE_CHORDS) == 12*44+1, f"There should be {12*44+1} possible chords, but there are {len(POSSIBLE_CHORDS)}. Check the NOTE_NAMES and CHORD_TYPES lists."
+
+
+ def to_base_9(n):
+     # converts a decimal integer to base 9
+     if n == 0: return [0, 0, 0]
+     digits = []
+     while n:
+         digits.append(n % 9)
+         n //= 9
+     while len(digits) < 3: # add leading zeros
+         digits.append(0)
+     return digits[::-1]
+
+
+ def chord_num_to_color(cn, scale=30):
+     # "embeddings" for chords, from (0,0,30) up to (240,240,240) in each (RGB) channel, in steps of 30
+     color = to_base_9(cn+1)
+     return tuple(x*scale for x in color)
+
+ def color_to_chord_num(color, scale=30, warnings_on=False):
+     # reverse of chord_num_to_color, note that color goes backwards
+     out = sum([x//scale * 9**i for i, x in enumerate(color[::-1])])-1
+     if out < 0:
+         if warnings_on: print(f"color_to_chord_num: Warning: out should be equal to or greater than 0: color = {color}, out = {out}. Wrapping around to {len(POSSIBLE_CHORDS)+out}")
+         out = len(POSSIBLE_CHORDS) + out
+     return out
+
+
+ def simplify_chord(chord_name):
+     """Simplifies chord names by applying a few rules:
+     1. get rid of the ones with parentheses, e.g. change "A:maj(11)" to just "A:maj"?
+     2. remove the notes in the bass, like mapping all "A:7/3", "A:7/5" and "A:7/b7" to just "A:7"?
+     3. remove suspension markings, e.g. sus2, sus4?
+     4. maybe? high-numbered added notes like "G:min11" & "G:min13" -> "G:min"
+     """
+     chord_name = re.sub(r'\(.*','',chord_name) # 1
+     chord_name = re.sub(r'\/.*','',chord_name) # 2
+     chord_name = re.sub(r'sus.*','',chord_name) # 3
+     return chord_name
+
+
+
+
+ def get_unique_indices(data):
+     """Returns the indices of non-repeating values in a list
+     Args:
+         data: A list of any data type.
+         Example: data = [0, 1, 4, 1, 5, 5, 5, 6, 10, 6, 6, 5]
+
+     Returns:
+         A list of indices for non-repeating values.
+         Example: result = [0, 1, 2, 3, 6, 7, 8, 10, 11]
+     """
+     return [i for i, (val, next_val) in enumerate(zip(data, data[1:])) if val != next_val] + [len(data) - 1]
+
+ def get_nonrepeated_values(data, indices=None):
+     """Returns the non-repeating values in a list
+     Args:
+         data: A list of any data type.
+         Example: data = [0, 1, 4, 1, 5, 5, 5, 6, 10, 6, 6, 5]
+
+     Returns:
+         A list of non-repeating values.
+         Example: returns [0, 1, 4, 1, 5, 6, 10, 6, 5]
+     """
+     if indices is None:
+         indices = get_unique_indices(data)
+     return [data[i] for i in indices]
+
+
+
+ def most_freq_or_first(arr, debug=False):
+     "returns the most frequent value in the array, or, if multiple values are equally frequent, the first such value"
+     assert len(arr.shape) == 1, "arr must be 1D"
+     savearr = arr.copy()
+     if debug:
+         print("most_freq_or_first: arr = ", arr)
+     if savearr.min() < 0: # if there are negative values, we need to shift them up to 0
+         arr = arr - savearr.min()
+     bc = np.bincount(arr)
+     try:
+
+         if np.any(arr < 0): bc[arr < 0] = 0 # don't include negative arr values when checking for most frequent
+         bc[bc != bc.max()] = 0 # only interested in most frequent values
+     except Exception as e:
+         print("Exception ",e)
+         print("most_freq_or_first: arr.shape = ", arr.shape)
+         print("most_freq_or_first: arr = ", arr )
+         print("most_freq_or_first: bc.shape = ", bc.shape)
+         raise e
+     out = np.argmax(bc)
+     # shift numbers back down
+     if savearr.min() < 0:
+         out = out + savearr.min()
+     assert out.max() <= arr.max(), f"out.max() = {out.max()} should be less than arr.max() = {arr.max()}"
+     return out
+
+
+ def most_freq_or_first_every(arr,
+         every=4, # pixels per chord label. 4=every quarter note
+         ):
+     "used to grab the most frequent chord labels, assuming we're starting on a beat. arr = chord label indices, e.g. in 0..528"
+     assert len(arr.shape) == 1, "arr must be 1D"
+     remainder = len(arr) % every
+     if remainder != 0:
+         arr = np.pad(arr, (0, every - remainder), mode='constant', constant_values=(0, arr[- remainder]))
+         #print("most_freq_or_first_every: Warning: Padding arr with last beat value on end. new arr =",arr)
+     check = arr.reshape((-1,every))
+     out = np.array( [most_freq_or_first(a) for a in arr.reshape((-1,every))] )
+     if out.max() > arr.max():
+         for i, c in enumerate(check):
+             mfof = most_freq_or_first(c)
+             if mfof > c.max():
+                 print(f"i={i}, c={c}, most_freq_or_first(c)={mfof}")
+         raise ValueError(f"out.max() = {out.max()} should be less than arr.max() = {arr.max()}")
+
+     return out
+
+
+ def chord_str_to_pair(chord_str):
+     "converts a chord string to a pair of (note, chord) indices"
+     if chord_str == 'N':
+         return (-1,-1)
+     note, chord_type = chord_str.split(':')
+     note_ind = NOTE_NAMES.index(note)
+     chord_type_ind = CHORD_TYPES.index(chord_type)
+     return (note_ind, chord_type_ind)
+
+ def chords_str_to_pairs(chords_str):
+     for chord_str in chords_str.split(','):
+         yield chord_str_to_pair(chord_str)
+
+ def chords_str_to_inds(chords_str):
+     for chord_str in chords_str.split(','):
+         yield POSSIBLE_CHORDS.index(chord_str)
+
+ def pair_to_chord_index(pair):
+     "converts a pair of (note, chord_type) indices to a single chord index"
+     note_ind, chord_type_ind = pair
+     return note_ind*len(CHORD_TYPES) + chord_type_ind
+
+ def chord_index_to_pair(ci):
+     "converts a single chord index to a pair of (note, chord) indices"
+     note_ind = ci // len(CHORD_TYPES)
+     chord_type_ind = ci % len(CHORD_TYPES)
+     return (note_ind, chord_type_ind)
+
+ def chord_index_to_str(ci):
+     "converts a single chord index to a chord string"
+     return POSSIBLE_CHORDS[ci]
+
+
+ class ChordEmbedding(nn.Module):
+     def __init__(self, chord_emb_dim=8, note_emb_dim=8, type_emb_dim=8, debug=False):
+         super(ChordEmbedding, self).__init__()
+         self.emb_note = nn.Embedding(len(NOTE_NAMES)+1, note_emb_dim) # +1 for "N", i.e. no chord
+         self.emb_type = nn.Embedding(len(CHORD_TYPES), type_emb_dim)
+         self.compactify = nn.Linear(note_emb_dim + type_emb_dim, chord_emb_dim)
+         self.chord_emb_dim = chord_emb_dim
+         self.debug = debug
+         self.zero_vec = torch.zeros((1, self.chord_emb_dim))
+         self.chord_emb_dim = chord_emb_dim
+
+     def forward(self, chord_inds:torch.Tensor, debug=False):
+         """chord_inds should have shape (B,) where B is the batch size; each value is the index of the chord in the vocabulary.
+         Wherever an index equals len(POSSIBLE_CHORDS), we want to return a zero vector; otherwise we return the embedding"""
+         if chord_inds.max() > len(POSSIBLE_CHORDS):
+             torch.set_printoptions(threshold=10000)
+             print(f"\nchord_inds.max() = {chord_inds.max()} but len(POSSIBLE_CHORDS) = {len(POSSIBLE_CHORDS)}. \nchord_inds = {chord_inds}")
+             raise ValueError("chord_inds.max() should be less than len(POSSIBLE_CHORDS)")
+         note_inds, type_inds = chord_inds // len(CHORD_TYPES), chord_inds % len(CHORD_TYPES)
+         # note that for the 'N' chord, where chord_ind==len(POSSIBLE_CHORDS)-1, we get note_inds=len(NOTE_NAMES) and type_inds=0. that's why self.emb_note has len(NOTE_NAMES)+1 entries
+         if debug:
+             print("note_inds, type_inds = ", note_inds, type_inds)
+             print("note_inds.max(), type_inds.max() = ", note_inds.max(), type_inds.max())
+         note_emb = self.emb_note(note_inds)
+         type_emb = self.emb_type(type_inds)
+         if debug: print("\nnote_emb.shape, type_emb.shape = ", note_emb.shape, type_emb.shape)
+         combined_emb = torch.cat((note_emb, type_emb), dim=1)
+         if debug: print("combined_emb.shape = ", combined_emb.shape)
+         x = self.compactify(combined_emb)
+         if debug: print("ce: x.shape, self.chord_emb_dim = ", x.shape, self.chord_emb_dim)
+         return x
+
+
+ class ChordAE(nn.Module):
+     """Maybe not needed: Autoencoder for training chord embeddings?
+     Note: we don't really need an AE for the full model, we can get by with just the encoder (and no decoder),
+     but the AE is useful for exploring how few dimensions we can get away with"""
+     def __init__(self, chord_vocab_size=len(POSSIBLE_CHORDS), chord_emb_dim=8):
+         super(ChordAE, self).__init__()
+         self.encoder = ChordEmbedding(chord_emb_dim)
+         self.decoder = nn.Linear(chord_emb_dim, chord_vocab_size) # could do better maybe
+     def forward(self, x, debug=False):
+         x = self.encoder(x)
+         x = self.decoder(x)
+         return x
+
+ def abs_seq_to_rel_seq(seq:torch.Tensor):
+     """converts a batch of absolute sequences of chord indices to a batch of relative sequences of chord indices:
+     subtract the note of the first element in each batch from all the other note indices, modulo len(NOTE_NAMES),
+     leave the first element unchanged, and overwrite any 'N' chords with...something else? TODO
+     """
+     assert len(seq.shape)==2, f"seq should be 2D, but seq.shape = {seq.shape}"
+     # decompose seq into two tensors, one of notes and one of chord types
+     note_inds, type_inds = seq // len(CHORD_TYPES), seq % len(CHORD_TYPES)
+     # for note_inds<12, subtract these from the first element in the sequence, modulo len(NOTE_NAMES) i.e. 12
+     note_inds2 = note_inds.clone()
+     note_inds2[:,1:] = (note_inds2[:,1:] - note_inds2[:,0].unsqueeze(1)) % len(NOTE_NAMES)
+     # 'N' chords: wherever note_inds == 12, overwrite note_inds2 with 12
+     note_inds2[note_inds == len(NOTE_NAMES)] = len(NOTE_NAMES)
+     # recompose seq
+     changes_seq = note_inds2 * len(CHORD_TYPES) + type_inds # now these are no longer chords, they are chord *changes* rel to the first chord
+     return changes_seq
+
+
+
+
+ class ChordSeqEncoder(nn.Module):
+     """Encoder for sequences of chords:
+     We embed the first chord, then we embed the CHANGES in chords thereafter (using modulo-12 arithmetic on the bass note)
+     (4 chords per bar x 32 bars = 128 chords),
+     and then pass the sequence of the chords through some sequence model
+     (LSTM for now, could use a Transformer or something else later)
+     to generate a [256]-dimensional embedding of the sequence of chord embeddings
+     """
+     def __init__(self, chord_emb_dim=8, seq_len=512//4, seq_emb_dim=256, hidden_dim=512, dropout=0.2):
+         super(ChordSeqEncoder, self).__init__()
+         self.chord_encoder = ChordEmbedding()
+         self.seq_encoder = nn.LSTM(chord_emb_dim, seq_emb_dim, batch_first=True, num_layers=2, dropout=dropout)
+         self.seq_len = seq_len
+     def forward(self, bs):
+         "bs should have dimensions (B, S) where B is the batch size and S is the length of the sequence of chord indices"
+         B,S = bs.shape
+         changes_seq = abs_seq_to_rel_seq(bs) # convert to relative sequence of chord indices
+         # get chord embeddings for every chord in the batch in the sequence
+         x = self.chord_encoder(changes_seq.flatten())
+         # reshape x into (B, S, E) where B is the batch size, S is the sequence length, and E is the chord embedding dimension
+         x = x.view(B, S, -1)
+         E = x.shape[-1]
+         #print("before seq_encoder, x.shape = ", x.shape)
+         #x, _ = self.seq_encoder(x)
+         output, (hidden, cell) = self.seq_encoder(x)
+
+         # output of forward should be a 2-D tensor of shape (B, SE) where SE = seq_emb_dim
+         x = hidden[0, :, :] # use the final hidden state (layer 0 of the LSTM) as the embedding of the sequence
+         #print("after seq_encoder, x.shape = ", x.shape)
+         return x
+
+
+ class ChordSeqAE(nn.Module):
+     """
+     Chord Sequence Autoencoder. For pretraining a ChordSeqEncoder
+     """
+     def __init__(self, chord_emb_dim=8, seq_len=512//4, seq_emb_dim=256,
+             hidden_dim=512, chord_vocab_size=len(POSSIBLE_CHORDS),
+             vae_scale=0.1):
+         super(ChordSeqAE, self).__init__()
+         self.encoder = ChordSeqEncoder(chord_emb_dim=chord_emb_dim, seq_len=seq_len, seq_emb_dim=seq_emb_dim, hidden_dim=hidden_dim)
+         # made decoder a sequence of linear layers with a ReLU in between
+         self.decoder = nn.Sequential(
+             nn.Linear(seq_emb_dim, hidden_dim),
+             nn.ReLU(),
+             nn.Linear(hidden_dim, seq_len*chord_vocab_size)
+         )
+         self.chord_vocab_size = chord_vocab_size
+         self.vae_scale = vae_scale
+
+     def forward(self, bs, debug=False):
+         "bs should have dimensions (B, S) where B is the batch size and S is the length of the sequence of chord indices"
+         if debug: print("ChordSeqAE: bs.shape = ", bs.shape)
+         B,S = bs.shape
+         x = self.encoder(bs)
+         if debug: print("ChordSeqAE: encoded x.shape = ", x.shape)
+         if self.vae_scale > 0 and self.training:
+             x = x + self.vae_scale*((x.max()-x.min())) * torch.randn_like(x)
+         x = self.decoder(x)
+         x = x.view(B, S, -1)
+         if debug: print("ChordSeqAE: decoded x.shape = ", x.shape)
+         return x
+
+ def chord_seq_from_img(img:Image.Image,
+         every=8, # was imagining every beat (every=4), but looking at the data, the smallest chord label seems to be 8 pixels wide
+         debug=False):
+     """extracts a sequence of chord indices from a pianoroll image
+     hopefully the dataloader will mean we can just do one image and it'll batch them
+     """
+     if debug: print("img.size, img.min, img.max = ",img.size, np.array(img).min(), np.array(img).max())
+     if img.size[0] == img.size[1]: # if image is square, make it rectangular
+         img = square_to_rect(img)
+     img_arr = np.array(img)
+     top_row = img_arr[CHORD_BORDER//2] # all x's along y=CHORD_BORDER/2
+     if debug:
+         img.save("chord_seq_from_img.png")
+         print("img_arr.shape = ", img_arr.shape)
+         print("top_row.shape = ", top_row.shape)
+         print("top_row = ", top_row)
+     chord_seq = np.array([color_to_chord_num(tuple(c)) for c in top_row])
+     if chord_seq.max() >= len(POSSIBLE_CHORDS):
+         print(f"chord_seq.max = {chord_seq.max()} should be less than len(POSSIBLE_CHORDS) = {len(POSSIBLE_CHORDS)}\nchord_seq = {chord_seq}")
+         indices = np.where(chord_seq >= len(POSSIBLE_CHORDS))[0]
+         print("indices, chord_seq[indices], top_row[indices] = ", indices, chord_seq[indices], top_row[indices])
+         raise ValueError("chord_seq.max() should be less than len(POSSIBLE_CHORDS)")
+     chord_seq_beats = most_freq_or_first_every(chord_seq, every=every)
+     assert chord_seq_beats.max() <= chord_seq.max(), f"chord_seq_beats.max() = {chord_seq_beats.max()} should be less than chord_seq.max() = {chord_seq.max()}"
+     if debug: print("chord_seq_beats, len(POSSIBLE_CHORDS) = ", chord_seq_beats, len(POSSIBLE_CHORDS))
+     assert chord_seq_beats.max() < len(POSSIBLE_CHORDS), f"chord_seq_beats.max() should be less than len(POSSIBLE_CHORDS) = {len(POSSIBLE_CHORDS)}"
+     return torch.tensor(chord_seq_beats)
+
+
+ def chord_seq_from_img_tensor_batch(img_tensor_batch:torch.Tensor, every=8, debug=False):
+     """extracts a sequence of chord indices from a batch of pianoroll images"""
+     batch_size = img_tensor_batch.shape[0]
+     itb = (img_tensor_batch + 1.0) * 127.5 # rescale from -1..1 to 0..255
+     chord_seqs = []
+     for i in range(batch_size): # TODO: may be a faster way to do this with tensor ops
+         # converting to images and back is slow
+         img = Image.fromarray(np.round( itb[i].cpu().permute(1,2,0).numpy()).astype(np.uint8))
+         img = square_to_rect(img)
+         chord_seq = chord_seq_from_img(img, every=every)
+         chord_seqs.append(chord_seq)
+     return torch.stack(chord_seqs).to(img_tensor_batch.device)
+
+ def img_batch_to_seq_emb(img_tensor_batch:torch.Tensor, chord_seq_encoder:nn.Module, every=8, debug=False):
+     """converts a batch of pianoroll images to a batch of chord sequence embeddings"""
+     chord_seq_batch = chord_seq_from_img_tensor_batch(img_tensor_batch, every=every, debug=debug)
+     cs_emb = chord_seq_encoder(chord_seq_batch)
+     return cs_emb
+
+ # TODO: test it!
+
+ if __name__ == '__main__':
+     # FOR TESTING/DEV ONLY
+     import sys, random
+
+     def make_image_tensor_batch(batch_size=2):
+         """FOR TESTING/DEV ONLY: makes a batch of random chord-endowed pianoroll (square) images
+         So I can iterate other parts of this faster w/o having to spin up crowson's training code every time while I write code here.
+         shape = (B, 3, 256, 256), normalization = -1.0 to 1.0
+         """
+         img_batch = torch.zeros((batch_size, 3, 256, 256))
+         for i in range(batch_size):
+             n = i+1  # np.random.randint(0, 909)
+             img_filename = f"/data/POP909-Dataset/images_128_rg_chords_TOTAL/{n:03}_TOTAL.png" # place to grab images from
+             img = Image.open(img_filename).convert('RGB')
+             # crop to 512 pixels wide
+             img = img.crop((0,0,512,128))
+             img = rect_to_square(img)
+             img_batch[i] = torch.tensor(np.array(img)).permute(2,0,1).float() / 127.5 - 1.0 # normalization done by dataloader makes images -1 to 1
+         return img_batch
+
+     # quick check of the mapping
+     for cn in range(len(POSSIBLE_CHORDS)):
+         color = chord_num_to_color(cn)
+         print("cn, color = ", cn, color)
+         cn2 = color_to_chord_num(color)
+         assert cn2 == cn, f"cn2={cn2} should be cn={cn}, color={color}"
+
+
+     if len(sys.argv) <= 1:
+         print("Testing suite, Usage: python chords.py <some_arg>")
+         sys.exit(1)
+     some_arg = sys.argv[1]
+
+     batch_size=2
+     img_tensor_batch = make_image_tensor_batch(batch_size=batch_size)
+     print("img_tensor_batch.shape = ", img_tensor_batch.shape)
+     print("img_tensor_batch.min(), img_tensor_batch.max() = ", img_tensor_batch.min(), img_tensor_batch.max())
+
+     chord_seq_batch = chord_seq_from_img_tensor_batch(img_tensor_batch, every=8, debug=False)
+
+     print("chord_seq_batch.shape = ", chord_seq_batch.shape)
+     print(f"chord_seq_batch = \n{chord_seq_batch}")
+
+
+     cse = ChordSeqEncoder()
+     cs_emb = cse(chord_seq_batch)
+
+     print("cs_emb.shape = ", cs_emb.shape)
+     #print(f"cs_emb = \n{cs_emb}")
+     sys.exit(0)
+
+
+
+
+     #img_filename = some_arg
+     img = Image.open(img_filename).convert('RGB')
+     chord_ind_seq = chord_seq_from_img(img, debug=False)
+     print("chord_ind_seq = ", chord_ind_seq)
+     print("len(chord_ind_seq) = ", len(chord_ind_seq))
+     chord_embedder = ChordEmbedding(len(POSSIBLE_CHORDS))
+     #print("chord_embeddings = ", chord_embedder(chord_ind_seq))
+     sys.exit(0)
+     #chords_str = some_arg
+     #cis = chords_str_to_inds(chords_str)
+     cis = chord_ind_seq
+     for ci in cis:
+         print("\n-------")
+         #ci = pair_to_chord_index(pair)
+         pair = chord_index_to_pair(ci)
+         print(f"Input: chord_str = {chords_str}, pair = {pair}, ci = {ci}")
+         color = chord_num_to_color(ci)
+         print(color)
+         cn2 = color_to_chord_num(color)
+         out_str = chord_index_to_str(cn2)
+         print(f"Output: cn2 = {cn2}, out_str = {out_str}")
+
+         print("Embedding: ")
+         with torch.no_grad():
+             x = torch.tensor([ci])
+             print(chord_embedder(x))
+
+
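
For orientation, a minimal usage sketch of the functions defined above (assuming pom/chords.py is importable as chords; the chord chosen is arbitrary). The base-9 color scheme gives nine 30-step levels per RGB channel, i.e. up to 9^3 = 729 codes, which comfortably covers the 529-chord vocabulary:

import torch
from chords import (POSSIBLE_CHORDS, chord_num_to_color, color_to_chord_num,
                    chord_index_to_str, simplify_chord, ChordEmbedding)

ci = POSSIBLE_CHORDS.index("A:min7/b7")        # chord string -> integer index
color = chord_num_to_color(ci)                 # index -> RGB "embedding" color, steps of 30
assert color_to_chord_num(color) == ci         # the color round-trips back to the same index
print(simplify_chord(chord_index_to_str(ci)))  # 'A:min7/b7' simplifies to 'A:min7'

emb = ChordEmbedding(chord_emb_dim=8)          # learnable per-chord embedding
vec = emb(torch.tensor([ci]))                  # shape (1, 8)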
pom/chords.txt ADDED
@@ -0,0 +1,528 @@
+ 11
+ 13
+ 7
+ 7/3
+ 7/5
+ 7(#9)
+ 7/b7
+ 9
+ aug
+ 11
+ 13
+ 7
+ 7/3
+ 7/5
+ 7(#9)
+ 7/b7
+ 9
+ aug
+ dim
+ dim7
+ hdim7
+ maj
+ maj(11)
+ maj13
+ maj/3
+ maj/5
+ maj6
+ maj6(9)
+ maj7
+ maj7/3
+ maj7/5
+ maj7/7
+ maj(9)
+ maj9
+ maj9(11)
+ min
+ min(11)
+ min11
+ min13
+ min/5
+ min6
+ min6(9)
+ min7
+ min7/5
+ min7/b7
+ min(9)
+ min9
+ min/b3
+ minmaj7
+ sus2
+ sus4
+ sus4(b7)
+ sus4(b7,9)
+ dim
+ dim7
+ hdim7
+ maj
+ maj(11)
+ maj13
+ maj/3
+ maj/5
+ maj6
+ maj6(9)
+ maj7
+ maj7/3
+ maj7/5
+ maj7/7
+ maj(9)
+ maj9
+ maj9(11)
+ min
+ min(11)
+ min11
+ min13
+ min/5
+ min6
+ min6(9)
+ min7
+ min7/5
+ min7/b7
+ min(9)
+ min9
+ min/b3
+ minmaj7
+ sus2
+ sus4
+ sus4(b7)
+ sus4(b7,9)
+ 11
+ 13
+ 7
+ 7/3
+ 7/5
+ 7(#9)
+ 7/b7
+ 9
+ aug
+ 11
+ 13
+ 7
+ 7/3
+ 7/5
+ 7(#9)
+ 7/b7
+ 9
+ aug
+ dim
+ dim7
+ hdim7
+ maj
+ maj(11)
+ maj13
+ maj/3
+ maj/5
+ maj6
+ maj6(9)
+ maj7
+ maj7/3
+ maj7/5
+ maj7/7
+ maj(9)
+ maj9
+ maj9(11)
+ min
+ min(11)
+ min11
+ min13
+ min/5
+ min6
+ min6(9)
+ min7
+ min7/5
+ min7/b7
+ min(9)
+ min9
+ min/b3
+ minmaj7
+ sus2
+ sus4
+ sus4(b7)
+ sus4(b7,9)
+ dim
+ dim7
+ hdim7
+ maj
+ maj(11)
+ maj13
+ maj/3
+ maj/5
+ maj6
+ maj6(9)
+ maj7
+ maj7/3
+ maj7/5
+ maj7/7
+ maj(9)
+ maj9
+ maj9(11)
+ min
+ min(11)
+ min11
+ min13
+ min/5
+ min6
+ min6(9)
+ min7
+ min7/5
+ min7/b7
+ min(9)
+ min9
+ min/b3
+ minmaj7
+ sus2
+ sus4
+ sus4(b7)
+ sus4(b7,9)
+ 11
+ 11
+ 13
+ 13
+ 7
+ 7
+ 7/3
+ 7/3
+ 7/5
+ 7/5
+ 7(#9)
+ 7(#9)
+ 7/b7
+ 7/b7
+ 9
+ 9
+ aug
+ aug
+ dim
+ dim
+ dim7
+ dim7
+ hdim7
+ hdim7
+ maj
+ maj
+ maj(11)
+ maj(11)
+ maj13
+ maj13
+ maj/3
+ maj/3
+ maj/5
+ maj/5
+ maj6
+ maj6
+ maj6(9)
+ maj6(9)
+ maj7
+ maj7
+ maj7/3
+ maj7/3
+ maj7/5
+ maj7/5
+ maj7/7
+ maj7/7
+ maj(9)
+ maj9
+ maj(9)
+ maj9
+ maj9(11)
+ maj9(11)
+ min
+ min
+ min(11)
+ min11
+ min(11)
+ min11
+ min13
+ min13
+ min/5
+ min/5
+ min6
+ min6
+ min6(9)
+ min6(9)
+ min7
+ min7
+ min7/5
+ min7/5
+ min7/b7
+ min7/b7
+ min(9)
+ min9
+ min(9)
+ min9
+ min/b3
+ min/b3
+ minmaj7
+ minmaj7
+ sus2
+ sus2
+ sus4
+ sus4
+ sus4(b7)
+ sus4(b7)
+ sus4(b7,9)
+ sus4(b7,9)
+ 11
+ 13
+ 7
+ 7/3
+ 7/5
+ 7(#9)
+ 7/b7
+ 9
+ aug
+ dim
+ dim7
+ hdim7
+ maj
+ maj(11)
+ maj13
+ maj/3
+ maj/5
+ maj6
+ maj6(9)
+ maj7
+ maj7/3
+ maj7/5
+ maj7/7
+ maj(9)
+ maj9
+ maj9(11)
+ min
+ min(11)
+ min11
+ min13
+ min/5
+ min6
+ min6(9)
+ min7
+ min7/5
+ min7/b7
+ min(9)
+ min9
+ min/b3
+ minmaj7
+ sus2
+ sus4
+ sus4(b7)
+ sus4(b7,9)
+ 11
+ 13
+ 7
+ 7/3
+ 7/5
+ 7(#9)
+ 7/b7
+ 9
+ aug
+ 11
+ 13
+ 7
+ 7/3
+ 7/5
+ 7(#9)
+ 7/b7
+ 9
+ aug
+ dim
+ dim7
+ hdim7
+ maj
+ maj(11)
+ maj13
+ maj/3
+ maj/5
+ maj6
+ maj6(9)
+ maj7
+ maj7/3
+ maj7/5
+ maj7/7
+ maj(9)
+ maj9
+ maj9(11)
+ min
+ min(11)
+ min11
+ min13
+ min/5
+ min6
+ min6(9)
+ min7
+ min7/5
+ min7/b7
+ min(9)
+ min9
+ min/b3
+ minmaj7
+ sus2
+ sus4
+ sus4(b7)
+ sus4(b7,9)
+ dim
+ dim7
+ hdim7
+ maj
+ maj(11)
+ maj13
+ maj/3
+ maj/5
+ maj6
+ maj6(9)
+ maj7
+ maj7/3
+ maj7/5
+ maj7/7
+ maj(9)
+ maj9
+ maj9(11)
+ min
+ min(11)
+ min11
+ min13
+ min/5
+ min6
+ min6(9)
+ min7
+ min7/5
+ min7/b7
+ min(9)
+ min9
+ min/b3
+ minmaj7
+ sus2
+ sus4
+ sus4(b7)
+ sus4(b7,9)
+ 11
+ 11
+ 13
+ 13
+ 7
+ 7
+ 7/3
+ 7/3
+ 7/5
+ 7/5
+ 7(#9)
+ 7(#9)
+ 7/b7
+ 7/b7
+ 9
+ 9
+ aug
+ aug
+ dim
+ dim
+ dim7
+ dim7
+ hdim7
+ hdim7
+ maj
+ maj
+ maj(11)
+ maj(11)
+ maj13
+ maj13
+ maj/3
+ maj/3
+ maj/5
+ maj/5
+ maj6
+ maj6
+ maj6(9)
+ maj6(9)
+ maj7
+ maj7
+ maj7/3
+ maj7/3
+ maj7/5
+ maj7/5
+ maj7/7
+ maj7/7
+ maj(9)
+ maj9
+ maj(9)
+ maj9
+ maj9(11)
+ maj9(11)
+ min
+ min
+ min(11)
+ min11
+ min(11)
+ min11
+ min13
+ min13
+ min/5
+ min/5
+ min6
+ min6
+ min6(9)
+ min6(9)
+ min7
+ min7
+ min7/5
+ min7/5
+ min7/b7
+ min7/b7
+ min(9)
+ min9
+ min(9)
+ min9
+ min/b3
+ min/b3
+ minmaj7
+ minmaj7
+ sus2
+ sus2
+ sus4
+ sus4
+ sus4(b7)
+ sus4(b7)
+ sus4(b7,9)
+ sus4(b7,9)
+ 11
+ 13
+ 7
+ 7/3
+ 7/5
+ 7(#9)
+ 7/b7
+ 9
+ aug
+ dim
+ dim7
+ hdim7
+ maj
+ maj(11)
+ maj13
+ maj/3
+ maj/5
+ maj6
+ maj6(9)
+ maj7
+ maj7/3
+ maj7/5
+ maj7/7
+ maj(9)
+ maj9
+ maj9(11)
+ min
+ min(11)
+ min11
+ min13
+ min/5
+ min6
+ min6(9)
+ min7
+ min7/5
+ min7/b7
+ min(9)
+ min9
+ min/b3
+ minmaj7
+ sus2
+ sus4
+ sus4(b7)
+ sus4(b7,9)