crypto-code committed on
Commit 6cacbd4
1 Parent(s): 888be28

Update README.md

Files changed (1)
  1. README.md +27 -0
README.md CHANGED
---
license: mit
---

# M<sup>2</sup>UGen Model with AudioLDM2-Music

The M<sup>2</sup>UGen model is a Music Understanding and Generation model capable of Music Question Answering, Music Generation
from text, images, videos and audio, and Music Editing. The model uses encoders such as MERT for music understanding, ViT for image understanding
and ViViT for video understanding, together with the MusicGen/AudioLDM2 model as the music generation model (music decoder). These components are coupled
with adapters and the LLaMA 2 model to enable the model's multiple abilities.
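
For reference, the understanding encoders named above are available as Hugging Face checkpoints and can be loaded independently of M<sup>2</sup>UGen. Below is a minimal sketch assuming `transformers` and commonly used checkpoint names (`m-a-p/MERT-v1-330M`, `google/vit-base-patch16-224`, `google/vivit-b-16x2-kinetics400`); these names are illustrative assumptions and may differ from the exact weights used here, so check the code repository for the configuration actually used:

```python
# Minimal sketch (illustrative): loading the kinds of encoders M2UGen builds on.
# The checkpoint names are assumptions, not the official M2UGen configuration.
import numpy as np
import torch
from transformers import (
    AutoModel,
    Wav2Vec2FeatureExtractor,  # MERT reuses the Wav2Vec2 feature extractor
    ViTImageProcessor,
    ViTModel,
    VivitImageProcessor,
    VivitModel,
)

# Music encoder (MERT); needs trust_remote_code for its custom model class
mert = AutoModel.from_pretrained("m-a-p/MERT-v1-330M", trust_remote_code=True)
mert_processor = Wav2Vec2FeatureExtractor.from_pretrained(
    "m-a-p/MERT-v1-330M", trust_remote_code=True
)

# Image encoder (ViT)
vit = ViTModel.from_pretrained("google/vit-base-patch16-224")
vit_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")

# Video encoder (ViViT)
vivit = VivitModel.from_pretrained("google/vivit-b-16x2-kinetics400")
vivit_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")

# Example: extract MERT features for 5 seconds of (silent) dummy audio
sr = mert_processor.sampling_rate
waveform = np.zeros(sr * 5, dtype=np.float32)
inputs = mert_processor(waveform, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    music_features = mert(**inputs).last_hidden_state  # (1, num_frames, hidden_dim)
```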

M<sup>2</sup>UGen was published in [M<sup>2</sup>UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models](https://arxiv.org/abs/2311.11255) by *Atin Sakkeer Hussain, Shansong Liu, Chenshuo Sun and Ying Shan*.

The code for the model is available at [crypto-code/M2UGen](https://github.com/crypto-code/M2UGen). Clone the repository, download the checkpoint and run the following for a model demo:
```bash
python gradio_app.py --model ./ckpts/M2UGen-AudioLDM2/checkpoint.pth --llama_dir ./ckpts/LLaMA-2 --music_decoder audioldm2 --music_decoder_path cvssp/audioldm2-music
```
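
Since `cvssp/audioldm2-music` (the `--music_decoder_path` above) is a standard `diffusers` checkpoint, the music decoder can also be tried on its own, outside the M<sup>2</sup>UGen pipeline. A minimal text-to-music sketch, assuming `diffusers` with the `AudioLDM2Pipeline` and a CUDA device (prompt and output file name are placeholders):

```python
# Minimal sketch (illustrative): running the AudioLDM2-Music decoder standalone.
# This only exercises the music decoder; no LLaMA 2, adapters or M2UGen code involved.
import scipy.io.wavfile
import torch
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained(
    "cvssp/audioldm2-music", torch_dtype=torch.float16
).to("cuda")

prompt = "An upbeat acoustic guitar melody with light percussion"  # placeholder prompt
audio = pipe(
    prompt,
    num_inference_steps=200,
    audio_length_in_s=10.0,
).audios[0]

# AudioLDM2 outputs 16 kHz mono audio as a NumPy array
scipy.io.wavfile.write("generated_music.wav", rate=16000, data=audio)
```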

## Citation

If you find this model useful, please consider citing:

```bibtex
@article{hussain2023m,
  title={{M$^{2}$UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models}},
  author={Hussain, Atin Sakkeer and Liu, Shansong and Sun, Chenshuo and Shan, Ying},
  journal={arXiv preprint arXiv:2311.11255},
  year={2023}
}
```