sairamn/gemma-7b-q
No model card has been provided for this repository.
Downloads last month: 25
GGUF
Model size: 8.54B params
Architecture: gemma
Available quantizations:
Q4_K_M (4-bit): 5.33 GB
F16 (16-bit): 17.1 GB
Inference Providers
This model isn't deployed by any Inference Provider.
HF Inference deployability: The model has no library tag.
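Since no hosted inference endpoint is available, the GGUF files listed above can be run locally. Below is a minimal sketch using huggingface_hub and llama-cpp-python to fetch and run the 4-bit Q4_K_M variant; the filename is an assumption and should be checked against the repository's actual file listing.

```python
# Minimal sketch, not an official example: download the 4-bit GGUF file and run it
# locally with llama-cpp-python. The filename "gemma-7b-q.Q4_K_M.gguf" is an
# assumption; check the repository's file listing for the actual name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="sairamn/gemma-7b-q",
    filename="gemma-7b-q.Q4_K_M.gguf",  # assumed filename
)

llm = Llama(model_path=model_path, n_ctx=2048)

result = llm("Summarize what a Q4_K_M quantization trades off.", max_tokens=64)
print(result["choices"][0]["text"])
```

The F16 file (17.1 GB) can be loaded the same way by swapping in its filename, at roughly three times the memory footprint of the 4-bit file.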
Collection including sairamn/gemma-7b-q:
Finetuned Gemma Models (3 items, updated Feb 27)