license: other
---
This is a [ggml](https://github.com/ggerganov/ggml/) quantized version of [Replit-v2-CodeInstruct-3B](https://huggingface.co/teknium/Replit-v2-CodeInstruct-3B), quantized to 4-bit (q4_1).
To run inference, you can use ggml directly or [ctransformers](https://github.com/marella/ctransformers).
- Memory usage of model: **~2GB**
- Repo to run the model using ctransformers: https://github.com/abacaj/replit-3B-inference |
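As a quick illustration of the ctransformers route, the sketch below loads the quantized file and generates a completion. The local filename and the prompt format are assumptions, not part of this repo; adjust them to match the actual `.bin` file you downloaded and the instruction template used by the base model.

```python
# Minimal inference sketch using ctransformers.
# The model path below is a hypothetical local filename -- point it at
# the quantized q4_1 .bin file you downloaded from this repo.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "replit-v2-codeinstruct-3b.q4_1.bin",  # assumed local path
    model_type="replit",
)

# Assumed Alpaca-style instruction template; check the base model card
# for the exact prompt format.
prompt = (
    "### Instruction:\n"
    "Write a Python function that reverses a string.\n"
    "### Response:\n"
)
print(llm(prompt, max_new_tokens=128, temperature=0.2))
```

For a complete, ready-to-run setup, see the inference repo linked above.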