---
language:
- en
license: apache-2.0
tags:
- multimodal
- vision
- image-text-to-text
- mlx
datasets:
- HuggingFaceM4/OBELICS
- laion/laion-coco
- wikipedia
- facebook/pmd
- pixparse/idl-wds
- pixparse/pdfa-eng-wds
- wendlerc/RenderedText
- HuggingFaceM4/the_cauldron
- teknium/OpenHermes-2.5
- GAIR/lima
- databricks/databricks-dolly-15k
- meta-math/MetaMathQA
- TIGER-Lab/MathInstruct
- microsoft/orca-math-word-problems-200k
- camel-ai/math
- AtlasUnified/atlas-math-sets
- tiedong/goat
---
|
|
|
# mlx-community/idefics2-8b-8bit |
|
This model was converted to MLX format from [`HuggingFaceM4/idefics2-8b`](https://huggingface.co/HuggingFaceM4/idefics2-8b) using mlx-vlm version **0.0.4**.
|
Refer to the [original model card](https://huggingface.co/HuggingFaceM4/idefics2-8b) for more details on the model. |
|
## Use with mlx |
|
|
|
```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/idefics2-8b-8bit --max-tokens 100 --temp 0.0
```
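
The CLI above can also be driven from Python. Below is a minimal sketch using mlx-vlm's `load` and `generate` functions; the exact keyword arguments (and whether a chat template must be applied to the prompt) vary between mlx-vlm versions, and the example image URL is only an illustrative placeholder, so treat this as a starting point rather than the definitive interface:

```python
# Sketch of the mlx-vlm Python API; argument names may differ by version.
from mlx_vlm import load, generate

# Downloads the quantized weights from the Hugging Face Hub on first use.
model, processor = load("mlx-community/idefics2-8b-8bit")

# Generate a description of an image (placeholder URL; a local path also works).
output = generate(
    model,
    processor,
    prompt="Describe this image.",
    image="http://example.com/image.jpg",
    max_tokens=100,
    temp=0.0,
)
print(output)
```

Greedy decoding (`temp=0.0`) mirrors the CLI invocation above and gives deterministic output, which is convenient for smoke-testing a freshly converted checkpoint.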
|
|