iproskurina's Collections: Quantized LLMs • LMs + Topological Data Analysis🌌 • LMs for French 🥐 • BabyLMs 🧸
Quantized LLMs • Updated Nov 12
LLMs quantized with GPTQ
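These checkpoints can be loaded directly with the Transformers library once a GPTQ backend (optimum together with auto-gptq or gptqmodel) is installed; the quantization config stored in each repository is picked up automatically. A minimal sketch, assuming the 4-bit Mistral checkpoint from this collection and a CUDA-capable GPU:

```python
# Minimal sketch: loading a GPTQ-quantized checkpoint from this collection.
# Assumes `pip install transformers accelerate optimum auto-gptq` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config is read from the repo; weights stay in 4-bit.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```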
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 • Text Generation • Updated Sep 25 • 20 downloads
iproskurina/bloom-7b1-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 36 downloads • 2 likes
iproskurina/bloom-1b7-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 30 downloads
iproskurina/bloom-3b-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 26 downloads
iproskurina/bloom-560m-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 26 downloads
iproskurina/bloom-1b1-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 26 downloads
iproskurina/opt-2.7b-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 44 downloads
iproskurina/opt-13b-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 37 downloads
iproskurina/opt-6.7b-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 38 downloads
iproskurina/opt-125m-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 6 downloads
iproskurina/opt-350m-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 29 downloads
iproskurina/opt-1.3b-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 28 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128 • Text Generation • Updated Sep 24 • 9 downloads
iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 • Text Generation • Updated Sep 25 • 11 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64 • Text Generation • Updated Sep 24 • 12 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 • Text Generation • Updated Sep 24 • 11 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-4bit-g128 • Text Generation • Updated Sep 24 • 11 downloads
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g128 • Text Generation • Updated Sep 24 • 9 downloads
TheBloke/Mistral-7B-Instruct-v0.1-GPTQ • Text Generation • Updated Sep 29, 2023 • 116k downloads • 78 likes
TheBloke/Mistral-7B-Instruct-v0.2-GPTQ • Text Generation • Updated Dec 11, 2023 • 519k downloads • 50 likes
TheBloke/bloomz-176B-GPTQ • Text Generation • Updated Jul 7, 2023 • 17 downloads • 20 likes
TheBloke/BLOOMChat-176B-v1-GPTQ • Text Generation • Updated Jul 7, 2023 • 15 downloads • 31 likes
TheBloke/Llama-2-13B-chat-GPTQ • Text Generation • Updated Sep 27, 2023 • 29.1k downloads • 362 likes
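The iproskurina checkpoints above follow a {base model}-GPTQ-{bits}bit-g{group size} naming pattern. A comparable checkpoint can be produced with the GPTQConfig API in Transformers; the sketch below assumes 4-bit weights, group size 128, and the built-in C4 calibration option, which is not necessarily the exact recipe used for the repositories above:

```python
# Sketch: producing a 4-bit, group-size-128 GPTQ checkpoint with Transformers.
# Assumes `pip install transformers accelerate optimum auto-gptq` and enough GPU
# memory to run calibration; the exact recipe behind the repos above may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "facebook/opt-125m"  # small base model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(base_id)

gptq_config = GPTQConfig(
    bits=4,          # weight precision, as in the *-4bit-* repos
    group_size=128,  # as in the *-g128 repos
    dataset="c4",    # built-in calibration dataset option
    tokenizer=tokenizer,
)

# Calibration runs during from_pretrained; the result holds quantized weights.
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=gptq_config, device_map="auto"
)
model.save_pretrained("opt-125m-GPTQ-4bit-g128")
tokenizer.save_pretrained("opt-125m-GPTQ-4bit-g128")
```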
Paper: When Quantization Affects Confidence of Large Language Models? • arXiv:2405.00632 • Published May 1, 2024
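The collection accompanies the paper above, which studies how quantization shifts the predictive confidence of LLMs. As a rough illustration of the idea (not the paper's exact protocol), one can compare the mean probability that a full-precision model and a GPTQ-quantized counterpart assign to the observed next tokens:

```python
# Rough sketch (not the paper's exact protocol): compare mean next-token
# confidence of a full-precision model and a GPTQ-quantized counterpart.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_confidence(model_id: str, texts: list[str]) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    model.eval()
    scores = []
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        with torch.no_grad():
            logits = model(**inputs).logits
        # Probability the model assigns to each actually observed next token.
        probs = torch.softmax(logits[:, :-1], dim=-1)
        target = inputs["input_ids"][:, 1:].unsqueeze(-1)
        scores.append(probs.gather(-1, target).mean().item())
    return sum(scores) / len(scores)

texts = ["The Eiffel Tower is located in Paris."]
full = mean_confidence("facebook/opt-125m", texts)                     # full precision
quant = mean_confidence("iproskurina/opt-125m-GPTQ-4bit-g128", texts)  # 4-bit GPTQ
print(f"full-precision: {full:.3f}  quantized: {quant:.3f}")
```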