musicaudiopretrain committed
Commit c5a8dd1 • Parent(s): 50f1f9d
Update README.md

README.md CHANGED
|
# Introduction to our series work

The development log of our Music Audio Pre-training (m-a-p) model family:

- 02/06/2023: [arXiv pre-print](https://arxiv.org/abs/2306.00107) and training [code](https://github.com/yizhilll/MERT) released.
- 17/03/2023: we released two advanced music understanding models, [MERT-v1-95M](https://huggingface.co/m-a-p/MERT-v1-95M) and [MERT-v1-330M](https://huggingface.co/m-a-p/MERT-v1-330M), trained with a new paradigm and dataset. They outperform the previous models and generalize better to more tasks.
- 14/03/2023: we retrained the MERT-v0 model with an open-source-only music dataset: [MERT-v0-public](https://huggingface.co/m-a-p/MERT-v0-public).
- 29/12/2022: we released [MERT-v0](https://huggingface.co/m-a-p/MERT-v0), a music understanding model trained with the **MLM** paradigm, which performs better at downstream tasks.
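The README's usage snippet ends by printing a 768-dimensional `weighted_avg_hidden_states` vector, i.e. the per-layer hidden states reduced to a single feature vector via a learnable weighted average over layers. A minimal sketch of that reduction, using random placeholder activations instead of real model outputs (the layer count of 13 and hidden size of 768 are assumptions matching MERT-v1-95M's base-size configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the model's stacked hidden states:
# (num_layers, time_steps, hidden_size). Real values would come from
# the model's outputs.hidden_states; random data stands in here.
hidden_states = rng.standard_normal((13, 100, 768))

# Average over the time axis first: one 768-dim vector per layer.
time_reduced = hidden_states.mean(axis=1)            # shape (13, 768)

# Layer weights (uniform initialization shown); in practice these are
# learned, and a softmax keeps them normalized to sum to 1.
weights = np.ones(13)
weights = np.exp(weights) / np.exp(weights).sum()

# Weighted combination across layers collapses (13, 768) -> (768,).
weighted_avg_hidden_states = weights @ time_reduced
print(weighted_avg_hidden_states.shape)  # (768,)
```

Uniform weights just average the layers; training the weights lets a downstream probe pick which layers matter for its task.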
# Citation

```bibtex
@misc{li2023mert,
      title={MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training},
      author={Yizhi Li and Ruibin Yuan and Ge Zhang and Yinghao Ma and Xingran Chen and Hanzhi Yin and Chenghua Lin and Anton Ragni and Emmanouil Benetos and Norbert Gyenge and Roger Dannenberg and Ruibo Liu and Wenhu Chen and Gus Xia and Yemin Shi and Wenhao Huang and Yike Guo and Jie Fu},
      year={2023},
      eprint={2306.00107},
      archivePrefix={arXiv},
      primaryClass={cs.SD}
}
```