---
license: apache-2.0
tags:
- mlx
---

# GreenBitAI/Llama-3-8B-layer-mix-bpw-2.5-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Llama-3-8B-layer-mix-bpw-2.5`](https://huggingface.co/GreenBitAI/Llama-3-8B-layer-mix-bpw-2.5).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-3-8B-layer-mix-bpw-2.5) for more details on the model.
## Use with MLX

```bash
pip install gbx-lm
```

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Llama-3-8B-layer-mix-bpw-2.5-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
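
The sketch below extends the basic example with a chat-formatted prompt and a generation length cap. It assumes `gbx_lm` mirrors the `mlx_lm` API, where `generate` accepts a `max_tokens` argument and the loaded tokenizer exposes the standard Hugging Face `apply_chat_template` method; check the gbx-lm documentation if these differ.

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Llama-3-8B-layer-mix-bpw-2.5-mlx")

# Format the prompt with the Llama 3 chat template (assumes the tokenizer
# ships a chat template, as Hugging Face tokenizers typically do).
messages = [{"role": "user", "content": "Explain quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

# max_tokens caps the response length; verbose=True streams tokens to stdout
# (both assumed to match the mlx_lm generate signature).
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```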