---
base_model: deepseek-ai/DeepSeek-V3
tags:
  - mlx
---

# mlx-community/DeepSeek-V3-3bit

The model [mlx-community/DeepSeek-V3-3bit](https://huggingface.co/mlx-community/DeepSeek-V3-3bit) was converted to MLX format from [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) using mlx-lm version **0.20.4**.
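
A quantized conversion like this one can be reproduced with mlx-lm's conversion utility. The call below is a sketch rather than the exact command used for this repo: the `convert` function and its `quantize`/`q_bits` parameters are assumed from the mlx-lm API, and the output path is illustrative.

```python
from mlx_lm import convert

# Sketch: quantize the original weights to 3 bits with mlx-lm
# (parameter names assumed from mlx-lm's convert API; the original
# conversion used mlx-lm 0.20.4).
convert(
    "deepseek-ai/DeepSeek-V3",
    mlx_path="DeepSeek-V3-3bit",
    quantize=True,
    q_bits=3,
)
```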

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-V3-3bit")

prompt = "hello"

# Apply the chat template if the tokenizer defines one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
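
If you prefer streaming output instead of a single returned string, mlx-lm also provides a streaming generator. This is a minimal sketch assuming `stream_generate` in this mlx-lm version yields chunks with a `.text` field; adjust to your installed version's API if it differs.

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/DeepSeek-V3-3bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens as they are generated (chunk.text is assumed to hold the
# decoded text for each step).
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```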