---
title: README
emoji: πŸ“š
colorFrom: green
colorTo: indigo
sdk: static
pinned: false
---
# MLX Community
A community org for model weights compatible with [mlx-examples](https://github.com/ml-explore/mlx-examples) powered by [MLX](https://github.com/ml-explore/mlx).
These are pre-converted weights, ready to use with the example scripts.
# Quick start for LLMs
Install `mlx-lm`:
```
pip install mlx-lm
```
You can use `mlx-lm` from the command line. For example:
```
mlx_lm.generate --model mlx-community/Mistral-7B-Instruct-v0.3-4bit --prompt "hello"
```
This will download a Mistral 7B model from the Hugging Face Hub and generate
text using the given prompt.
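You can also use `mlx-lm` directly from Python. Below is a minimal sketch, assuming the `load` and `generate` helpers exported by the `mlx_lm` package:
```python
from mlx_lm import load, generate

# Download (or load from the local cache) the 4-bit Mistral model and its tokenizer
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# Generate a completion for the prompt; verbose=True streams tokens as they are produced
text = generate(model, tokenizer, prompt="hello", verbose=True)
```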
For a full list of options run:
```
mlx_lm.generate --help
```
To quantize a model from the command line run:
```
mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.3 -q
```
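The same conversion can be scripted from Python. A short sketch, assuming `mlx_lm` exposes a `convert` function with `mlx_path` and `quantize` arguments mirroring the CLI:
```python
from mlx_lm import convert

# Download the Hugging Face model, quantize it, and write the
# MLX-format weights to a local directory
convert("mistralai/Mistral-7B-Instruct-v0.3", mlx_path="mlx_model", quantize=True)
```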
For more options run:
```
mlx_lm.convert --help
```
You can upload new models to Hugging Face by specifying `--upload-repo` to
`convert`. For example, to upload a quantized Mistral-7B model to the
[MLX Hugging Face community](https://huggingface.co/mlx-community) you can do:
```
mlx_lm.convert \
--hf-path mistralai/Mistral-7B-Instruct-v0.3 \
-q \
--upload-repo mlx-community/my-4bit-mistral
```
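Uploading can also be done from Python; this sketch assumes the `convert` function accepts an `upload_repo` argument corresponding to the CLI flag:
```python
from mlx_lm import convert

# Quantize the model and push the converted weights to the mlx-community org
convert(
    "mistralai/Mistral-7B-Instruct-v0.3",
    quantize=True,
    upload_repo="mlx-community/my-4bit-mistral",
)
```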
For more details on the API, check out the full [README](https://github.com/ml-explore/mlx-examples/tree/main/llms).
### Other Examples:
For more examples, visit the [MLX Examples](https://github.com/ml-explore/mlx-examples) repo. The repo includes examples of:
- Parameter efficient fine tuning with LoRA
- Speech recognition with Whisper
- Image generation with Stable Diffusion
as well as many other machine learning applications and algorithms.