LnL-AI/dbrx-base-converted-v2-4bit-gptq-marlin-v2
Tags: Text Generation · Transformers · dbrx · custom_code · Inference Endpoints · text-generation-inference · 4-bit precision · gptq
arXiv: 2211.15841, 2304.11277
1 contributor · History: 4 commits
Latest commit ee68d75 (verified), 3 months ago, by Qubitium: "Upload gptq_model-4bit-128g.safetensors.partab with huggingface_hub"
File                                     Size     LFS  Commit message                                                       Updated
.gitattributes                           1.67 kB       Upload gptq_model-4bit-128g.safetensors.partab with huggingface_hub  3 months ago
gptq_model-4bit-128g.safetensors.partab  8 GB     LFS  Upload gptq_model-4bit-128g.safetensors.partab with huggingface_hub  3 months ago
gptq_model-4bit-128g.safetensors.partae  8 GB     LFS  Upload gptq_model-4bit-128g.safetensors.partae with huggingface_hub  3 months ago
tiktoken.py                              17 kB         Upload tiktoken.py with huggingface_hub                              3 months ago
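The `.partab`/`.partae` suffixes match the default alphabetical naming produced by `split(1)`, which suggests the safetensors file was sharded for upload and must be concatenated back into one file before loading. A minimal sketch of the reassembly, shown here with tiny stand-in shards since the real ones are ~8 GB each (the filenames below are illustrative, not files from this repo):

```shell
set -e
# Work in a scratch directory with small stand-in shards; the real shards
# (gptq_model-4bit-128g.safetensors.partab, ...partae, ...) reassemble
# the same way once all parts are downloaded.
cd "$(mktemp -d)"
printf 'first'  > model.safetensors.partaa
printf 'second' > model.safetensors.partab

# split(1)'s default suffixes (partaa, partab, ...) sort alphabetically,
# so a shell glob concatenates the shards in the correct order.
cat model.safetensors.part* > model.safetensors

cat model.safetensors   # -> firstsecond
```

After concatenation, the single `.safetensors` file can be placed back in the model directory and loaded as usual.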