Instructions for using mlx-community/gemma-4-e2b-it-bf16 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- MLX
How to use mlx-community/gemma-4-e2b-it-bf16 with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir gemma-4-e2b-it-bf16 mlx-community/gemma-4-e2b-it-bf16
```
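Once the weights are downloaded, one way to run the model locally is with the `mlx-lm` package's command-line generator. This is a sketch: the package name and `mlx_lm.generate` entry point come from the mlx-lm project, and the prompt is only an illustration.

```shell
# Install the MLX LM runtime (requires Apple Silicon)
pip install mlx-lm

# Generate text from the locally downloaded model directory
mlx_lm.generate --model ./gemma-4-e2b-it-bf16 --prompt "Write a haiku about the sea."
```

The `--model` flag also accepts the Hub repo id directly (e.g. `mlx-community/gemma-4-e2b-it-bf16`), in which case mlx-lm downloads the weights itself.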
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Xet hash: `c62336ad134cad6f154d84eb0e5a5fa9ca17cd665ef3ba5ac4fd02b1486760b4`
- Size of remote file: 32.2 MB
- SHA256: `cc8d3a0ce36466ccc1278bf987df5f71db1719b9ca6b4118264f45cb627bfe0f`
Xet efficiently stores large files inside Git by splitting them into unique chunks, which deduplicates storage and accelerates uploads and downloads.
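A downloaded file can be checked against a published SHA256 digest with Python's standard `hashlib`. The sketch below uses a placeholder file and content for demonstration; in practice you would point `sha256_of` at the downloaded weights file and compare against the digest listed above.

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder demonstration: write a small file and hash it.
# Replace the path with the real downloaded file and compare the result
# to the SHA256 published on the model page.
path = Path("example.bin")
path.write_bytes(b"hello")
print(sha256_of(path))
```

Chunked reading matters here: model weight files can be many gigabytes, so hashing them in one `read()` call would needlessly hold the whole file in memory.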