Description
This is a 4-bit GPTQ quantized version of failspy/Meta-Llama-3-8B-Instruct-abliterated-v3, calibrated on the wikitext2 dataset. The quantized weights total 5.73 GB, so the model fits on a GPU with 8 GB of VRAM.
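The model can be loaded like any GPTQ checkpoint via the Hugging Face transformers library (with optimum and a GPTQ backend such as auto-gptq installed). Below is a minimal usage sketch; the repository id is a placeholder, so substitute the actual id of this quantized model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id -- replace with the actual id of this quantized model.
model_id = "your-username/Meta-Llama-3-8B-Instruct-abliterated-v3-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # places the ~5.7 GB of 4-bit weights on the available GPU
)

prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```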