LLMs quantized with GPTQ
Irina Proskurina (iproskurina)

AI & ML interests: LLMs, quantization, pre-training
Recent Activity
Updated a model 25 days ago: iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g0111-s0
Published a model 25 days ago: iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g0111-s0
Updated a model 26 days ago: iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g10-s0
Collections: 4
Models: 54

iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g0111-s0 • Text Generation • Updated • 12
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g10-s0 • Text Generation • Updated • 23
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g11-s0 • Text Generation • Updated • 17
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g562-s0 • Text Generation • Updated • 6
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g12-s0 • Text Generation • Updated • 12
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g563-s0 • Text Generation • Updated • 9
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g0-s0 • Text Generation • Updated • 15
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g13-s0 • Text Generation • Updated • 14
iproskurina/opt-test • Text Generation • Updated • 11
iproskurina/opt-125m-gptq2 • Text Generation • Updated • 12
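
The repositories above are 4-bit GPTQ quantizations of Mistral-7B-v0.3 (plus two OPT test models). A minimal loading sketch follows, assuming these repos follow the standard Transformers GPTQ integration (transformers plus optimum and a GPTQ backend such as auto-gptq installed); the model id is taken from the list above and the generation prompt is illustrative.

# Minimal sketch: loading one of the 4-bit GPTQ checkpoints listed above.
# Assumes transformers, optimum, and a GPTQ backend (e.g. auto-gptq) are installed;
# quantization settings are read from the repo's quantization config automatically.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g0111-s0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("GPTQ quantizes weights to", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))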