MLX example

#1 opened by paulmaksimovich

The e5 approach is fascinating; it makes perfect sense. My mind is racing with the possibilities of applying it to this Mistral remix: 4k embedding dimension, 32k context (is that right?)!

Any chance we could get a MLX example?

I’d love to run this alongside a Mistral Instruct model, with the two coordinating in a RAG pipeline. It would be slick, since this is a heftier embedding model.

Thanks!
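For concreteness, here's a minimal sketch of the embedding-plus-instruct coordination I mean. The embed() and generate() functions are hypothetical placeholders standing in for the two models; nothing below is mlx-llm API:

import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: deterministic pseudo-embedding so the sketch runs.
    # Swap in real e5-mistral-7b-instruct embeddings (4096-dim) here.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    vec = np.random.default_rng(seed).normal(size=4096)
    return vec / np.linalg.norm(vec)

def generate(prompt: str) -> str:
    # Placeholder: swap in a Mistral Instruct generation call here.
    return f"[instruct model would answer here, given:\n{prompt}]"

def retrieve(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    # Rank documents by cosine similarity to the query embedding
    # (vectors are unit-normalized, so the dot product is cosine).
    q = embed(query)
    ranked = sorted(docs, key=lambda d: float(q @ embed(d)), reverse=True)
    return ranked[:top_k]

def rag_answer(query: str, docs: list[str]) -> str:
    # Stuff the top passages into the instruct model's prompt.
    context = "\n\n".join(retrieve(query, docs))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)

docs = ["MLX is Apple's array framework.", "Mistral 7B is a 7B-parameter LLM."]
print(rag_answer("What is MLX?", docs))

The point is just the split: the embedding model handles retrieval, the instruct model handles generation, and the prompt is the only interface between them.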

Hi @paulmaksimovich ,

Currently we do not have plans to include an MLX example, but we would appreciate community contributions.

Best,
Liang

Hi @paulmaksimovich, I've been working on this repository (https://github.com/riccardomusmeci/mlx-llm), and I recently added e5-mistral-7b-instruct.

Also, you can find a better example on the mlx-community 🤗 model card: https://huggingface.co/mlx-community/e5-mistral-7b-instruct-mlx

Once mlx-llm is installed, it's easy to get the model up and running with pre-trained (and converted) weights:

from mlx_llm.model import create_model

model = create_model("e5-mistral-7b-instruct")  # it will automatically download weights from the 🤗 hub
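One detail to keep in mind when wiring this into retrieval (this comes from the e5-mistral-7b-instruct model card, not from anything mlx-llm-specific): queries should be prefixed with a task instruction, while documents are embedded as-is. A small helper adapted from the model card:

def get_detailed_instruct(task_description: str, query: str) -> str:
    # Queries get an instruction prefix; documents do not.
    return f"Instruct: {task_description}\nQuery: {query}"

task = "Given a web search query, retrieve relevant passages that answer the query"
query_text = get_detailed_instruct(task, "how much protein should a female eat")

How you then pull the embedding out of the model depends on mlx-llm's API, so check the repo for the exact call.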

Thanks for the update @riccardomusmeci , much appreciated!
