---
title: README
emoji: πŸ‘€
colorFrom: green
colorTo: yellow
sdk: static
pinned: false
---

Multimodal Art Projection (M-A-P) is an open-source AI research community.

The community members work on research topics across a broad spectrum, including but not limited to the pre-training paradigms of foundation models, large-scale data collection and processing, and derived applications in coding, reasoning, and music generation.

The community is open to researchers keen on any relevant topic. Welcome to join us!

The development log of our Multimodal Art Projection (m-a-p) model family:

  • πŸ”₯08/05/2024: We release the fully transparent large language model MAP-Neo, a series of models for scaling-law exploration and post-training alignment, along with the training corpus Matrix.
  • πŸ”₯11/04/2024: MuPT paper and demo are out. HF collection.
  • πŸ”₯08/04/2024: Chinese Tiny LLM is out. HF collection.
  • πŸ”₯28/02/2024: We release ChatMusician's demo, code, model, data, and benchmark. πŸ˜†
  • πŸ”₯23/02/2024: We release OpenCodeInterpreter, which beats the GPT-4 code interpreter on HumanEval.
  • 23/01/2024: We release CMMMU for better evaluation of Chinese LMMs.
  • 13/01/2024: We release a series of Music Pretrained Transformer (MuPT) checkpoints, with sizes up to 1.3B parameters and an 8192-token context length. Our models are LLaMA2-based and pre-trained on the world's largest symbolic music dataset: 10B tokens in ABC notation format. We currently support the Megatron-LM format and will release Hugging Face checkpoints soon.
  • 02/06/2023: We officially release the MERT preprint paper and training code.
  • 17/03/2023: We release two advanced music understanding models, MERT-v1-95M and MERT-v1-330M, trained with a new paradigm and dataset. They outperform the previous models and generalize better to more tasks (see the loading sketch after this list).
  • 14/03/2023: We retrain the MERT-v0 model on an open-source-only music dataset and release it as MERT-v0-public.
  • 29/12/2022: We release MERT-v0, a music understanding model trained with the MLM paradigm that performs better on downstream tasks.
  • 29/10/2022: We release music2vec, a pre-trained MIR model trained with the BYOL paradigm.
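
As a quick reference, here is a minimal sketch of loading one of the MERT checkpoints above with 🤗 Transformers. The `m-a-p/MERT-v1-95M` repo id and the `trust_remote_code` flag follow the public model card; the audio path is a placeholder, and the time-averaging at the end is just one reasonable way to get a clip-level embedding.

```python
# Minimal sketch: load a MERT music-understanding checkpoint from the Hugging Face Hub.
# MERT uses a custom model class, hence trust_remote_code=True.
# Requires: pip install transformers torch torchaudio
import torch
import torchaudio
from transformers import AutoModel, Wav2Vec2FeatureExtractor

repo_id = "m-a-p/MERT-v1-95M"
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
processor = Wav2Vec2FeatureExtractor.from_pretrained(repo_id, trust_remote_code=True)

# Load audio, mix down to mono, and resample to the rate the
# feature extractor expects (24 kHz for MERT-v1).
waveform, sr = torchaudio.load("example.wav")  # placeholder path
mono = waveform.mean(dim=0)
mono = torchaudio.functional.resample(mono, sr, processor.sampling_rate)

inputs = processor(mono.numpy(), sampling_rate=processor.sampling_rate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states holds one tensor per layer; average over time
# for a clip-level embedding per layer.
all_layers = torch.stack(outputs.hidden_states)  # (layers+1, batch, time, dim)
clip_embedding = all_layers.mean(dim=2)          # (layers+1, batch, dim)
print(clip_embedding.shape)
```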