|
--- |
|
title: README |
|
emoji: π |
|
colorFrom: green |
|
colorTo: yellow |
|
sdk: static |
|
pinned: false |
|
--- |
|
|
|
We are a group of people working towards Music AGI (Artificial General Intelligence). We pre-train large music models (LMMs). 🔥
|
|
|
The development log of our Music Audio Pre-training (m-a-p) model family: |
|
- 02/06/2023: officially release the [MERT pre-print paper](https://arxiv.org/abs/2306.00107) and training [code](https://github.com/yizhilll/MERT).
|
- 17/03/2023: we release two advanced music understanding models, [MERT-v1-95M](https://huggingface.co/m-a-p/MERT-v1-95M) and [MERT-v1-330M](https://huggingface.co/m-a-p/MERT-v1-330M), trained with a new paradigm and dataset. They outperform the previous models and generalize better to more tasks.
|
- 14/03/2023: we retrained the MERT-v0 model on open-source-only music data: [MERT-v0-public](https://huggingface.co/m-a-p/MERT-v0-public).
|
- 29/12/2022: a music understanding model, [MERT-v0](https://huggingface.co/m-a-p/MERT-v0), trained with the **MLM** paradigm, which performs better on downstream tasks.
|
- 29/10/2022: a pre-trained MIR model, [music2vec](https://huggingface.co/m-a-p/music2vec-v1), trained with the **BYOL** paradigm.