Tags: Image-Text-to-Text · MLX · Safetensors · gemma3 · multimodal · vision · quantized · pruned · mobile · on-device · apple-silicon · conversational
Instructions for using AtomGradient/gemma-3-4b-it-qat-4bit-mobile with libraries, notebooks, and local apps. Use the options below to get started.
- Libraries
  - MLX

How to use AtomGradient/gemma-3-4b-it-qat-4bit-mobile with MLX (a sketch with explicit generation settings follows the lists below):
```python
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("AtomGradient/gemma-3-4b-it-qat-4bit-mobile")
config = load_config("AtomGradient/gemma-3-4b-it-qat-4bit-mobile")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - LM Studio
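The snippet above uses mlx-vlm's defaults. To cap the response length or stream tokens as they are produced, `generate` accepts extra keyword arguments; the names below (`max_tokens`, `verbose`) match recent mlx-vlm releases but are not taken from this card, so verify them against your installed version. A minimal sketch:

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model, processor = load("AtomGradient/gemma-3-4b-it-qat-4bit-mobile")
config = load_config("AtomGradient/gemma-3-4b-it-qat-4bit-mobile")

image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
formatted_prompt = apply_chat_template(
    processor, config, "Describe this image.", num_images=1
)

# max_tokens and verbose are assumed keyword names from recent mlx-vlm
# releases; older versions differ (e.g. some control sampling via `temp`),
# so check the API of the version you have installed.
output = generate(
    model,
    processor,
    formatted_prompt,
    image,
    max_tokens=256,   # cap the response length
    verbose=True,     # print tokens to stdout as they are generated
)
print(output)
```

Local file paths should also work in place of the URL in `image`, per mlx-vlm's image loading.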
Processor configuration:

```json
{
  "image_seq_length": 256,
  "processor_class": "Gemma3Processor"
}
```
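To read these values programmatically, one option is huggingface_hub. This sketch assumes the configuration above lives in the repo's processor_config.json, the conventional filename for processor settings, which the card does not state explicitly:

```python
import json

from huggingface_hub import hf_hub_download

# Assumption: the configuration shown above is stored in
# processor_config.json, the standard Hugging Face location
# for a processor_class entry.
path = hf_hub_download(
    repo_id="AtomGradient/gemma-3-4b-it-qat-4bit-mobile",
    filename="processor_config.json",
)
with open(path) as f:
    processor_config = json.load(f)

# image_seq_length is the number of tokens each image occupies
# in the model's input sequence.
print(processor_config["image_seq_length"])  # expected: 256
print(processor_config["processor_class"])   # expected: "Gemma3Processor"
```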