Tags: MLX · Safetensors · llama

mlx-community/Yi-1.5-34B-Chat-4bit

The model mlx-community/Yi-1.5-34B-Chat-4bit was converted to MLX format from 01-ai/Yi-1.5-34B-Chat using mlx-lm version 0.13.0.
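
For reference, quantized MLX checkpoints like this one are typically produced with mlx-lm's conversion utility. A minimal sketch is shown below; the output path and the exact arguments used for this particular checkpoint are assumptions, not details taken from the card.

from mlx_lm import convert

# Download the original weights, quantize them to 4-bit, and write an MLX
# checkpoint (hf_path / mlx_path / quantize argument names assumed from mlx-lm's
# convert helper; this card only states the source repo and mlx-lm version).
convert(
    hf_path="01-ai/Yi-1.5-34B-Chat",
    mlx_path="Yi-1.5-34B-Chat-4bit",
    quantize=True,
)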

Model added by Prince Canuma.

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Yi-1.5-34B-Chat-4bit")

# Generate a completion; verbose=True prints the generated text and generation statistics.
response = generate(model, tokenizer, prompt="hello", verbose=True)
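
Yi-1.5-34B-Chat is a chat-tuned model, so prompts usually give better results when wrapped in the tokenizer's chat template before calling generate. A minimal sketch, assuming the bundled tokenizer exposes a chat template (not stated on this card):

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Yi-1.5-34B-Chat-4bit")

prompt = "hello"
# Wrap the user message in the model's chat template when one is available;
# otherwise fall back to the raw prompt string.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)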

Safetensors model size: 5.37B params · Tensor types: FP16, U32
