---
language:
- ja
- en
license: llama3.1
tags:
- japanese
- llama
- llama-3
- mlx
pipeline_tag: text-generation
inference: false
---
# mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit
The model [mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit](https://huggingface.co/mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit) was converted to MLX format from [cyberagent/Llama-3.1-70B-Japanese-Instruct-2407](https://huggingface.co/cyberagent/Llama-3.1-70B-Japanese-Instruct-2407) using mlx-lm version **0.16.1**.
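
For reference, a conversion like this one can be reproduced with mlx-lm's `convert` API. A minimal sketch follows; the local output path `mlx_model_8bit` is illustrative, not taken from this card:

```python
from mlx_lm import convert

# Download the original weights, quantize them to 8-bit,
# and write an MLX-format copy to a local directory.
convert(
    "cyberagent/Llama-3.1-70B-Japanese-Instruct-2407",
    mlx_path="mlx_model_8bit",  # illustrative output path
    quantize=True,
    q_bits=8,
)
```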
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit")

# Generate a completion for a raw prompt string.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
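
Since this is an instruct-tuned model, prompts generally work better when wrapped in the model's chat template. A minimal sketch, continuing from the snippet above; the example message is illustrative:

```python
# Wrap a user message in the model's chat template before generating.
messages = [{"role": "user", "content": "日本の首都はどこですか？"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```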