This model is exclusively available to paid subscribers of The Kaitchup. To gain access, subscribe to The Kaitchup for either a monthly or yearly paid plan. Once subscribed, you will receive an access token by email and will have access to all the models listed on this page.


Model Details

This is meta-llama/Meta-Llama-3-8B quantized to 4-bit with AutoRound and serialized in the GPTQ format. The model was created, tested, and evaluated by The Kaitchup.

Details on the AutoRound quantization process and how to use the model are provided in the article Intel AutoRound: Accurate Low-bit Quantization for LLMs.
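Because the model is serialized in the GPTQ format, it can in principle be loaded directly with Hugging Face Transformers (a GPTQ backend such as auto-gptq must be installed, and gated repos additionally require an access token). The sketch below is a minimal, hedged example; the exact repo id, prompt, and generation settings are assumptions, not part of the official usage instructions:

```python
# Minimal usage sketch (assumptions: the repo id below, auto-gptq or an
# equivalent GPTQ backend installed, and a valid Hugging Face access token
# for this gated repository).

MODEL_ID = "kaitchup/Llama-3-8B-4bit-AutoRound-GPTQ"


def build_prompt(text: str) -> str:
    # Meta-Llama-3-8B is a base (non-instruct) model, so we use plain
    # text-completion prompts rather than a chat template.
    return text.strip()


def generate(prompt: str, max_new_tokens: int = 32) -> str:
    # Imports are deferred so the sketch can be read and its pure helpers
    # tested without the heavy dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate(build_prompt("The capital of France is")))
```

With `device_map="auto"`, Transformers places the quantized weights on the available GPU(s); the 4-bit serialization keeps memory usage well below that of the FP16 original.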

  • Developed by: The Kaitchup
  • Language(s) (NLP): English
  • License: cc-by-4.0
  • Format: Safetensors (4-bit GPTQ serialization)
  • Model size: 1.99B params
  • Tensor types: FP16, I32
