Attention graph features extracted from LMs fine-tuned on linguistic acceptability corpora
Irina Proskurina (iproskurina)
AI & ML interests: LLMs (quantization, pre-training)
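The title above refers to attention-graph features extracted from fine-tuned language models. The sketch below shows only the generic starting point for that kind of analysis: pulling per-head attention matrices out of a transformer encoder with Hugging Face `transformers`. The model name and the example sentence are placeholders, not the specific acceptability-tuned checkpoints from that work.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder model; any encoder fine-tuned on an acceptability corpus is handled the same way.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)
model.eval()

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attn = torch.stack(outputs.attentions)   # (layers, batch, heads, seq, seq)

# Reading each head's matrix as a weighted adjacency matrix over tokens,
# a simple graph-style feature is the incoming attention mass per token.
in_degree = attn.sum(dim=-2)             # (layers, batch, heads, seq)
print(attn.shape, in_degree.shape)
```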
Models (38)
iproskurina/Mistral-7B-v0.3-gptq-3bit • Text Generation • 18
iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 • Text Generation • 50
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 • Text Generation • 133
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 • Text Generation • 17
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64 • Text Generation • 15
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128 • Text Generation • 15
iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g128 • Text Generation • 13
iproskurina/Mistral-7B-v0.1-GPTQ-4bit-g128 • Text Generation • 12
iproskurina/opt-13b-GPTQ-4bit-g128 • Text Generation • 190
iproskurina/opt-2.7b-GPTQ-4bit-g128 • Text Generation • 34
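The checkpoints above are GPTQ-quantized causal language models; the name suffixes encode the bit width (3/4/8-bit) and GPTQ group size (g64 or g128). A minimal sketch of loading one of them, assuming `transformers` with `accelerate` and a GPTQ backend (e.g. `auto-gptq` or `gptqmodel` via `optimum`) is installed; the prompt is only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128"  # any checkpoint from the list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored with the checkpoint tells transformers to use
# its GPTQ backend; device_map="auto" places the weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Quantization reduces model size by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```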