# mlx-community/deepseek-vl-1.3b-4bit
This model was converted to MLX format from [`deepseek-ai/deepseek-vl-1.3b-chat`](https://huggingface.co/deepseek-ai/deepseek-vl-1.3b-chat) using mlx-vlm version **0.0.8**.
Refer to the original model card for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/deepseek-vl-1.3b-4bit --max-tokens 100 --temp 0.0
```
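The model can also be driven from Python. Below is a minimal sketch using the `load` and `generate` entry points that mlx-vlm exposes; the argument names and order have shifted between mlx-vlm releases, so treat this as illustrative and check the mlx-vlm README for your installed version. The image path and prompt are placeholders.

```python
# Minimal sketch: run this 4-bit DeepSeek-VL model via the mlx-vlm Python API.
# Assumes mlx-vlm's `load`/`generate` helpers; exact signatures vary by version.
from mlx_vlm import load, generate

# Downloads the weights from the Hugging Face Hub on first use.
model, processor = load("mlx-community/deepseek-vl-1.3b-4bit")

output = generate(
    model,
    processor,
    image="example.jpg",            # placeholder: path or URL to an input image
    prompt="Describe this image.",  # placeholder prompt
    max_tokens=100,
    temp=0.0,
)
print(output)
```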