This is a GGML-quantized version of Replit-v2-CodeInstruct-3B, quantized to 4-bit (q4_1). To run inference, you can use ggml directly or ctransformers (a minimal example is shown below the list).
- Memory usage of model: ~2 GB
- Repo to run the model using ctransformers: https://github.com/abacaj/replit-3B-inference
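A minimal sketch of loading the model with ctransformers, assuming the quantized weights file is named `replit-v2-codeinstruct-3b.q4_1.bin` (substitute the actual `.bin` filename from this repo) and that the model follows the common `### Instruction:` / `### Response:` prompt format:

```python
from ctransformers import AutoModelForCausalLM

# Load the GGML model file. The filename below is an assumption --
# replace it with the actual quantized .bin file from this repo.
llm = AutoModelForCausalLM.from_pretrained(
    "replit-v2-codeinstruct-3b.q4_1.bin",
    model_type="replit",
)

# Hypothetical instruct-style prompt; adjust to the template used
# during fine-tuning if it differs.
prompt = (
    "### Instruction:\n"
    "Write a Python function that adds two numbers.\n"
    "### Response:\n"
)

# Generate a completion. Sampling parameters are illustrative defaults.
print(llm(prompt, max_new_tokens=128, temperature=0.2))
```

For a more complete setup (streaming output, GPU offload, prompt handling), see the inference repo linked above.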