---
license: apache-2.0
datasets:
- HuggingFaceM4/OBELICS
- laion/laion-coco
- wikipedia
- facebook/pmd
- pixparse/idl-wds
- pixparse/pdfa-eng-wds
- wendlerc/RenderedText
- HuggingFaceM4/the_cauldron
- teknium/OpenHermes-2.5
- GAIR/lima
- databricks/databricks-dolly-15k
- meta-math/MetaMathQA
- TIGER-Lab/MathInstruct
- microsoft/orca-math-word-problems-200k
- camel-ai/math
- AtlasUnified/atlas-math-sets
- tiedong/goat
- Lin-Chen/ShareGPT4V
- jxu124/llava_conversation_58k
language:
- en
tags:
- multimodal
- vision
- image-text-to-text
- quantized
- 4-bit
- AWQ
---

4-bit AWQ-quantized version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b). Refer to the original model card for more information, including an inference snippet.
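The original card's usage pattern carries over to this checkpoint. Below is a minimal sketch of loading the quantized weights with `transformers`; the repo id `HuggingFaceM4/idefics2-8b-AWQ` is an assumption (adjust it to this repository's actual path), and AWQ inference requires a CUDA device with the `autoawq` package installed.

```python
def build_messages(question: str) -> list:
    """Build an Idefics2-style chat turn with one image slot plus a text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ],
        }
    ]


def main():
    # Heavy imports kept local so the helper above works without a GPU stack installed.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForVision2Seq

    # Assumption: the repo id of this quantized checkpoint.
    model_id = "HuggingFaceM4/idefics2-8b-AWQ"
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForVision2Seq.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="cuda"
    )

    # Placeholder image for the sketch; substitute a real PIL image in practice.
    image = Image.new("RGB", (224, 224), "white")
    prompt = processor.apply_chat_template(
        build_messages("What is in this image?"), add_generation_prompt=True
    )
    inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")
    out = model.generate(**inputs, max_new_tokens=64)
    print(processor.batch_decode(out, skip_special_tokens=True)[0])


if __name__ == "__main__":
    main()
```

Compared with the full-precision checkpoint, only the repo id changes; the processor and chat-template calls are identical.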