---
license: mit
pipeline_tag: text-generation
widget:
- text: ""
tags:
- music
---

# TunesFormer

## Model description

TunesFormer is a Transformer-based melody generation system trained on 285,449 melodies annotated with musical forms (represented by control codes), where all scores are written in ABC notation. It was introduced in the paper [TunesFormer: Forming Tunes with Control Codes](https://arxiv.org/abs/2301.02884) by Wu et al. The code is released in [this repository](https://github.com/sander-wood/tunesformer), and the dataset is released on [Hugging Face](https://huggingface.co/datasets/sander-wood/abc_cc).

By utilizing specific symbols commonly found in ABC notation to indicate section boundaries, TunesFormer can understand and generate melodies with given musical forms based on control codes. The checkpoint released here is TunesFormer-GP (Global Placement), where all the control codes are placed at the beginning of the ABC notation.

## Intended uses & limitations

You can use this model for melody generation conditioned on musical forms. All scores generated by this model can be written on one stave (for vocal solo or instrumental solo) in standard classical notation, and are in a variety of styles, e.g., blues, classical, folk, jazz, pop, and world music. The generated tunes are in ABC notation, and can be converted to sheet music or audio using [this website](https://ldzhangyx.github.io/abc/), or [this software](https://sourceforge.net/projects/easyabc/).

TunesFormer supports up to 8 sections and up to 32 bars per section. Although it mostly generates music that follows the control codes, the random nature of sampling means the generated musical structure occasionally deviates from the one specified by the control codes, particularly when more than 6 sections are requested or when a single section exceeds 17 bars. For more information, please check [our paper](https://arxiv.org/abs/2301.02884).

### How to use

1. Install dependencies for the code released in [this repository](https://github.com/sander-wood/tunesformer):
```
torch                        1.9.1+cu111
samplings                    0.1.7
transformers                 4.18.0
```
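
For example, assuming a pip-based environment with a CUDA 11.1 build of PyTorch (adjust the torch wheel to match your hardware and driver), the pinned versions above can be installed along these lines:
```
pip install torch==1.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install samplings==0.1.7 transformers==4.18.0
```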

2. Set the `control_codes` and `prompt` in the script `run_inference.py` for conditional music generation. 
```
control_codes = "[SECS_3][BARS_4][SIM_6][BARS_4][SIM_10][SIM_6][BARS_4]"
prompt = """L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"""
```
For TunesFormer, the input is a concatenation of `control_codes` and `prompt`. Both are optional; however, if you set the `prompt`, you must also set the `control_codes`, as in the sketch below.
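
As a rough illustration (this is not the repository script; the exact separators and tokenization are handled by `run_inference.py`), the model input is simply the control codes followed by the ABC prompt:
```
# Illustrative sketch only: run_inference.py performs the real tokenization and sampling.
control_codes = "[SECS_3][BARS_4][SIM_6][BARS_4][SIM_10][SIM_6][BARS_4]"
prompt = 'L:1/4\nM:4/4\nK:C\n"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||'
model_input = control_codes + prompt  # leave both empty for unconditional generation
```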
 
3. Run the script `run_inference.py`. The first time you run it, the required files are downloaded and cached for future reuse.

```
python run_inference.py -num_tunes 3 -max_length 1024 -top_p 0.9 -temperature 1.0 -seed 1
```

4. Enjoy the tunes in the folder `output_tunes`! If you want to convert these ABC tunes to sheet music or audio, please refer to the `Intended uses & limitations` section above.
```
X:1
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" G G"F" A A |"G" G G"C" E2 | 
"G" F F"C" E E |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]

X:2
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" E E"F" F F |"C" G G"F" A2 | 
"G7" F F"C" E E |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]

X:3
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" G G"F" A A |"C" G G"F" F2 | 
"C" E E"G" D D |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]
```
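
If you prefer a programmatic route instead of the website or EasyABC mentioned above, one third-party option (not part of the TunesFormer repository) is `music21`, which can parse ABC text and export MIDI or MusicXML. A minimal sketch for a single generated tune:
```
# Minimal sketch using music21, a third-party library not mentioned in this repository.
from music21 import converter

abc_tune = '''X:1
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]'''

score = converter.parse(abc_tune, format="abc")  # parse the ABC text into a music21 score
score.write("midi", fp="tune_1.mid")             # write a playable MIDI file
# score.show()  # renders sheet music if a notation program (e.g. MuseScore) is configured
```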

### Usage
```
optional arguments:
  -h, --help            show this help message and exit
  -num_tunes NUM_TUNES  the number of independently computed returned tunes
  -max_length MAX_LENGTH
                        the maximum length in tokens of each generated tune
  -top_p TOP_P          the cumulative probability threshold for nucleus
                        (top-p) sampling
  -temperature TEMPERATURE
                        the temperature of the sampling operation
  -seed SEED            seed for the random state

### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2301.02884,
  doi       = {10.48550/ARXIV.2301.02884},
  url       = {https://arxiv.org/abs/2301.02884},
  author    = {Wu, Shangda and Sun, Maosong},
  keywords  = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
  title     = {TunesFormer: Forming Tunes with Control Codes},
  publisher = {arXiv},
  year      = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}
```