---
language:
- en
tags:
- mlx
datasets:
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-to-text
inference: false
arxiv: 2304.08485
---

# mlx-community/llava-1.5-7b-4bit

This model was converted to MLX format from [`llava-hf/llava-1.5-7b-hf`](https://huggingface.co/llava-hf/llava-1.5-7b-hf) using mlx-vlm version **0.0.4**.

Refer to the [original model card](https://huggingface.co/llava-hf/llava-1.5-7b-hf) for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/llava-1.5-7b-4bit --max-tokens 100 --temp 0.0
```
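
The CLI invocation above relies on mlx-vlm's defaults for the prompt and image. For use from Python, the sketch below shows one way to call mlx-vlm's `load`/`generate` helpers; the exact argument names and order vary between mlx-vlm releases, and the image path and prompt are placeholders not taken from this card, so treat it as an illustrative assumption rather than the documented API for version 0.0.4.

```python
# Minimal sketch (assumptions: mlx-vlm's Python load/generate helpers;
# argument names/order may differ across mlx-vlm versions; the image path
# and prompt are placeholders).
from mlx_vlm import load, generate

# Load the 4-bit quantized LLaVA model and its processor from the Hub.
model, processor = load("mlx-community/llava-1.5-7b-4bit")

# LLaVA-1.5 uses the USER/ASSISTANT chat format with an <image> token.
prompt = "USER: <image>\nWhat is shown in this image?\nASSISTANT:"

# Generate a description of a local image (path is a placeholder).
output = generate(
    model,
    processor,
    prompt,
    image="example.jpg",
    max_tokens=100,
    temp=0.0,
)
print(output)
```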