GGUF model for Replit Code V-1.5 3B [https://huggingface.co/replit/replit-code-v1_5-3b]

Works with llama.cpp [https://github.com/ggerganov/llama.cpp]
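
For example, the model can be loaded through the llama-cpp-python bindings. The snippet below is a minimal sketch: the GGUF filename and context size are assumptions, so substitute the actual file shipped in this repo.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is an assumption; use the actual file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="replit-code-v1_5-3b.f16.gguf",  # assumed filename
    n_ctx=4096,                                 # context window
)

# Replit Code V-1.5 is a code-completion model, so prompt it with raw code.
out = llm("def fibonacci(n):", max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```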

Model details:
- Format: GGUF
- Model size: 3.32B params
- Architecture: mpt
- Precision: 16-bit