---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- mlx
---
# gsegato/brevity-mlx
The model [gsegato/brevity-mlx](https://huggingface.co/gsegato/brevity-mlx) was converted to MLX format from [gsegato/brevity](https://huggingface.co/gsegato/brevity) using mlx-lm version **0.16.1**.
## Use with mlx
```bash
pip install mlx-lm
```
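You can also generate text directly from the command line. A minimal sketch, assuming the `mlx_lm.generate` entry point shipped with mlx-lm:

```bash
python -m mlx_lm.generate --model gsegato/brevity-mlx --prompt "hello"
```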
```python
from mlx_lm import load, generate

# Load the converted weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("gsegato/brevity-mlx")

# Generate a completion; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
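
Since the base model is an instruct-tuned Llama 3.1 checkpoint, prompts are usually best formatted with the tokenizer's chat template. A minimal sketch, assuming the converted tokenizer ships a chat template:

```python
from mlx_lm import load, generate

model, tokenizer = load("gsegato/brevity-mlx")

prompt = "hello"

# Wrap the user message in the chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```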