Active filters: gptq
TheBloke/saiga_mistral_7b-GPTQ • Text Generation • 292 downloads • 8 likes
TheBloke/deepseek-llm-7B-chat-GPTQ • Text Generation • 466 downloads • 1 like
TheBloke/dolphin-2.5-mixtral-8x7b-GPTQ • Text Generation • 133 downloads • 109 likes
astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit • Text Generation • 10.7k downloads • 25 likes
neuralmagic/Mistral-7B-Instruct-v0.3-GPTQ-4bit • Text Generation • 1.34k downloads • 18 likes
allganize/Llama-3-Alpha-Ko-8B-Instruct-marlin • Text Generation • 16 downloads • 5 likes
Qwen/Qwen2-7B-Instruct-GPTQ-Int4 • Text Generation • 2.25k downloads • 24 likes
neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a16 • Text Generation • 498 downloads • 4 likes
pentagoniac/SEMIKONG-8b-GPTQ • Text Generation • 912 downloads • 26 likes
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • 1.32k downloads • 4 likes
neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16 • Text Generation • 144k downloads • 24 likes
neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 • Text Generation • 20.4k downloads • 30 likes
Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int4 • Image-Text-to-Text • 105k downloads • 31 likes
Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8 • Image-Text-to-Text • 7.19k downloads • 26 likes
Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4 • Image-Text-to-Text • 598k downloads • 20 likes
Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int8 • Image-Text-to-Text • 4.15k downloads • 13 likes
Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4 • Image-Text-to-Text • 135k downloads • 23 likes
Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int8 • Image-Text-to-Text • 2.19k downloads • 10 likes
Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int8 • Text Generation • 37.3k downloads • 8 likes
Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8 • Text Generation • 24.1k downloads • 12 likes
Qwen/Qwen2.5-14B-Instruct-GPTQ-Int4 • Text Generation • 15.3k downloads • 14 likes
Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4 • Text Generation • 44.1k downloads • 24 likes
Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4 • Text Generation • 35.5k downloads • 32 likes
Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8 • Text Generation • 4.75k downloads • 17 likes
IntelLabs/sqft-phi-3.5-mini-instruct-base-gptq • Text Generation • 157 downloads • 1 like
Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int8 • Text Generation • 15.2k downloads • 16 likes
Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int4 • Text Generation • 25k downloads • 12 likes
ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1 • Text Generation • 185 downloads • 13 likes
Xu-Ouyang/FloatLM_2.4B-int2-GPTQ-wikitext2 • Text Generation • 85 downloads • 1 like
Almheiri/Llama-3.2-1B-Instruct-GPTQ-INT4 • 52 downloads • 1 like
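A listing like the one above can be reproduced programmatically with the Hugging Face Hub client. The sketch below is an assumption-laden example, not part of the original page: it uses the `huggingface_hub` library's `HfApi.list_models` with the same `gptq` tag filter, sorted by downloads, and requires network access to the Hub. The download and like counts it prints will reflect the Hub at query time, not the snapshot above.

```python
# Sketch: reproduce a "gptq"-filtered Hub listing with huggingface_hub.
# Assumes `huggingface_hub` is installed and the Hub is reachable.
from huggingface_hub import HfApi

api = HfApi()
# filter="gptq" matches models tagged with GPTQ quantization;
# sort by download count, descending, and take the top 10.
models = api.list_models(filter="gptq", sort="downloads", direction=-1, limit=10)
for m in models:
    # Each ModelInfo carries the fields shown in the listing above:
    # repo id, pipeline tag, download count, and like count.
    print(f"{m.id} • {m.pipeline_tag} • {m.downloads} downloads • {m.likes} likes")
```

Note that the filter matches repository tags, so models whose names omit "GPTQ" (such as the `quantized.w4a16` repos above) still appear when they are tagged accordingly.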