# mlx-community/deepseek-vl2-8bit

This model was converted to MLX format from prince-canuma/deepseek-vl2 using mlx-vlm version 0.1.5. Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/deepseek-vl2-8bit --max-tokens 100 --temp 0.0
```
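
Beyond the CLI, mlx-vlm also exposes a Python API. The sketch below follows the load / apply_chat_template / generate pattern shown in the mlx-vlm README; the image URL and prompt are arbitrary examples, and the exact argument order of `generate` has varied across mlx-vlm releases, so check the version you have installed.

```python
# Minimal sketch of the mlx-vlm Python API (pattern from the mlx-vlm README).
# Assumes mlx-vlm is installed; the image URL and prompt are placeholder examples.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/deepseek-vl2-8bit"

# Load the quantized model, its processor, and its config from the Hub.
model, processor = load(model_path)
config = load_config(model_path)

# One image and a text prompt (replace with your own inputs).
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Wrap the prompt in the model's chat template.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))

# Generate text; argument order/names may differ in older mlx-vlm versions.
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```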