---
language:
- en
license: cc-by-nc-4.0
tags:
- merge
- lazymergekit
- dpo
- rlhf
- mlx
dataset:
- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
base_model:
- mlabonne/NeuralMonarch-7B
---

![AlphaMonarch-7B](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/TI7C8F2gk43gmI9U2L0uk.jpeg)

# mlx-community/AlphaMonarch-7B-mlx-4bit

This model was converted to MLX format from [`mlabonne/AlphaMonarch-7B`](https://huggingface.co/mlabonne/AlphaMonarch-7B).

Refer to the [original model card](https://huggingface.co/mlabonne/AlphaMonarch-7B) for more details on the model.
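
For reference, a 4-bit conversion like this one can be reproduced with the `mlx_lm` conversion utility. The snippet below is a minimal sketch, not the exact command used to build this repository; the output path and the quantization arguments (`quantize`, `q_bits`) are assumptions about `mlx_lm.convert`'s options.

```python
from mlx_lm import convert

# Download the original weights, quantize to 4 bits, and write an
# MLX-format model directory (path and arguments are illustrative).
convert(
    "mlabonne/AlphaMonarch-7B",
    mlx_path="AlphaMonarch-7B-mlx-4bit",
    quantize=True,
    q_bits=4,
)
```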
|
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the 4-bit weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/AlphaMonarch-7B-mlx-4bit")

# Generate a completion and stream it to stdout (verbose=True prints the text)
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
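
AlphaMonarch-7B is a DPO-tuned chat model, so in practice you will usually want to wrap your message in the tokenizer's chat template rather than passing raw text. A minimal sketch of that pattern, assuming the tokenizer bundled with this repo defines a chat template:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/AlphaMonarch-7B-mlx-4bit")

# Format a single-turn conversation with the model's chat template
# (assumes the bundled tokenizer ships one).
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```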
|