legraphista / Meta-Llama-3-70B-Instruct-abliterated-v3.5-IMat-GGUF
Tags: Text Generation · GGUF · quantized · imatrix · quantization · imat · static · 16bit · 8bit · 6bit · 5bit · 4bit · 3bit · 2bit · 1bit · conversational
License: llama3
Are the QK and IQ quantizations made from the F16 or the BF16 GGUF?
#2 · opened Jul 18 by Nexesenex

Nexesenex · Jul 18
It's not specified in the names. ^^
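Since the names don't say, one way to check is to read the GGUF metadata itself: every GGUF records a `general.file_type` key in its header, so inspecting the uploaded full-precision file tells you whether it is F16 or BF16. Below is a minimal sketch of a GGUF v3 header parser run on a synthetic in-memory buffer (no model download needed); the enum mapping `1 = F16` follows llama.cpp's file-type codes but is an assumption here, and note that for an already-quantized file this key reports the quant type, not the source precision.

```python
import struct

# GGUF value-type code for UINT32 per the GGUF v3 spec
GGUF_TYPE_UINT32 = 4

def read_file_type(buf: bytes):
    """Parse a GGUF header and return `general.file_type`, if present."""
    assert buf[:4] == b"GGUF", "not a GGUF file"
    version, = struct.unpack_from("<I", buf, 4)          # header: magic, version
    n_tensors, n_kv = struct.unpack_from("<QQ", buf, 8)  # tensor count, kv count
    off = 24
    for _ in range(n_kv):
        klen, = struct.unpack_from("<Q", buf, off); off += 8   # length-prefixed key
        key = buf[off:off + klen].decode(); off += klen
        vtype, = struct.unpack_from("<I", buf, off); off += 4  # value type tag
        if vtype == GGUF_TYPE_UINT32:
            val, = struct.unpack_from("<I", buf, off); off += 4
            if key == "general.file_type":
                return val
        else:
            break  # sketch only handles UINT32 values; stop on anything else
    return None

# Synthetic header for demonstration: version 3, 0 tensors,
# one kv pair general.file_type = 1 (assumed: 1 = all/mostly F16)
key = b"general.file_type"
demo = (b"GGUF" + struct.pack("<I", 3) + struct.pack("<QQ", 0, 1)
        + struct.pack("<Q", len(key)) + key
        + struct.pack("<I", GGUF_TYPE_UINT32) + struct.pack("<I", 1))

print(read_file_type(demo))  # → 1
```

In practice the `gguf` Python package (from llama.cpp's gguf-py) or the repo's dump script does this more robustly; the point is only that the source precision is recoverable from the header even when the filename omits it.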