XelotX/Meta-Llama-3-8B-Instruct-XelotX-BF16_iQuants
Tags: GGUF · imatrix · conversational
No model card
Downloads last month: 406

GGUF details:
- Model size: 8.03B params
- Architecture: llama
- Chat template: present (see the prompt-format sketch below)
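The GGUF metadata reports an embedded chat template, but its contents are not shown on this page. As a minimal sketch, assuming it follows the standard Llama 3 Instruct prompt layout, a manually built prompt would look like the following; runtimes that honor GGUF chat templates construct this string for you, so manual formatting is only needed when driving a raw completion API.

```python
# Sketch of the standard Llama 3 Instruct prompt layout. This is an assumption
# about what the GGUF-embedded template renders, not a dump of the template
# itself.
def build_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful assistant.", "Hello!"))
```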
Quantization variants and file sizes (hardware-compatibility estimates require logging in on Hugging Face):
| Bit width | Quant   | File size |
|-----------|---------|-----------|
| 1-bit     | IQ1_S   | 2.02 GB   |
| 1-bit     | IQ1_M   | 2.16 GB   |
| 2-bit     | IQ2_XXS | 2.4 GB    |
| 2-bit     | IQ2_XS  | 2.61 GB   |
| 2-bit     | IQ2_S   | 2.76 GB   |
| 2-bit     | Q2_K_S  | 2.99 GB   |
| 2-bit     | Q2_K    | 3.18 GB   |
| 3-bit     | IQ3_XXS | 3.27 GB   |
| 3-bit     | IQ3_S   | 3.68 GB   |
| 3-bit     | Q3_K_S  | 3.66 GB   |
| 3-bit     | Q3_K_M  | 4.02 GB   |
| 3-bit     | Q3_K_L  | 4.32 GB   |
| 4-bit     | IQ4_XS  | 4.45 GB   |
| 4-bit     | Q4_K_S  | 4.69 GB   |
| 4-bit     | IQ4_NL  | 4.68 GB   |
| 4-bit     | Q4_0    | 4.68 GB   |
| 4-bit     | Q4_1    | 5.13 GB   |
| 4-bit     | Q4_K_M  | 4.92 GB   |
| 5-bit     | Q5_K_S  | 5.6 GB    |
| 5-bit     | Q5_0    | 5.61 GB   |
| 5-bit     | Q5_1    | 6.07 GB   |
| 5-bit     | Q5_K_M  | 5.73 GB   |
| 6-bit     | Q6_K    | 6.6 GB    |
| 8-bit     | Q8_0    | 8.54 GB   |
(3 more quantization variants are not shown in this listing.)
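A minimal sketch of pulling one of these quants and running it with llama-cpp-python. The exact .gguf filenames in this repository are not listed on this page, so the `filename` glob below is an assumption; adjust it to whichever quant you pick from the table (smaller quants trade quality for memory).

```python
# Minimal sketch: download a quant from this repo and chat with it via
# llama-cpp-python (pip install llama-cpp-python huggingface-hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="XelotX/Meta-Llama-3-8B-Instruct-XelotX-BF16_iQuants",
    filename="*Q4_K_M.gguf",  # assumed glob; match it to an actual file in the repo
    n_ctx=8192,               # Llama 3 supports an 8k context window
)

# Recent llama-cpp-python releases pick up the chat template embedded in the
# GGUF metadata, so the messages below are formatted automatically.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what an importance-matrix (imatrix) quant is."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```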
Inference Providers
This model isn't deployed by any Inference Provider. HF Inference deployability: the model has no library tag.
Collection including XelotX/Meta-Llama-3-8B-Instruct-XelotX-BF16_iQuants: Meta-Llama-3(.1-3) (18 items, updated Jan 20)