---
license: other
---

This is a [ggml](https://github.com/ggerganov/ggml/) quantized version of [Replit-v2-CodeInstruct-3B](https://huggingface.co/teknium/Replit-v2-CodeInstruct-3B), quantized to 4-bit (q4_1).
To run inference, you can use ggml directly or [ctransformers](https://github.com/marella/ctransformers); see the sketch below.

- Memory usage of model: **~2 GB**
- Repo to run the model using ctransformers: https://github.com/abacaj/replit-3B-inference
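A minimal Python sketch of loading the quantized file with ctransformers. The local file name and the instruction-style prompt are assumptions for illustration, not taken from this card; see the linked inference repo for the full setup.

```python
from ctransformers import AutoModelForCausalLM

# Load the ggml q4_1 file (hypothetical local file name).
llm = AutoModelForCausalLM.from_pretrained(
    "replit-v2-codeinstruct-3b.q4_1.bin",
    model_type="replit",
)

# Instruction-style prompt format assumed here; adjust to the base model's template.
prompt = "### Instruction:\nWrite a Python function that reverses a string.\n### Response:\n"
print(llm(prompt, max_new_tokens=128, temperature=0.2))
```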