---
license: apache-2.0
tags:
- mlx
datasets:
- allenai/ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
---
# mlx-community/bagel-dpo-7b-v0.1-4bit-mlx
This model was converted to MLX format from [`jondurbin/bagel-dpo-7b-v0.1`](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1).
Refer to the [original model card](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/bagel-dpo-7b-v0.1-4bit-mlx")

# Generate a completion; verbose=True streams tokens to stdout as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```