aberrio committed (verified) · Commit 5e734a9 · 1 parent: 8a7a531

Update README.md

Files changed (1): README.md (+0 −1)
@@ -41,7 +41,6 @@ The Refact 1.6B FIM GGUF model is a state-of-the-art AI-powered coding assistant
 
 The model comes in various quantized versions to suit different computational needs:
 
-- **refact-1.6B-fim-q4_0.gguf**: A 4-bit quantized model with a file size of 920 MB.
 - **refact-1.6B-fim-q8_0.gguf**: A 8-bit quantized model with a file size of 1.69 GB.
 - **refact-1.6B-fim-f16.gguf**: A half precision model with a file size of 3.17 GB.
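As a rough guide to using one of the remaining quantized files, a minimal llama.cpp invocation might look like the sketch below. The binary name, flags, and prompt are assumptions about a typical llama.cpp build, not something specified by this commit; check your build's `--help` for the exact options.

```shell
# Illustrative sketch only: assumes a local llama.cpp build and that
# refact-1.6B-fim-q8_0.gguf has already been downloaded from this repo.
# Flag names follow llama.cpp's llama-cli; verify against your version.
./llama-cli \
  -m ./refact-1.6B-fim-q8_0.gguf \
  -p "def fibonacci(n):" \
  -n 128
```

The q8_0 file trades roughly half the size of the f16 file for a small quality loss, which is why it is a common default for CPU inference.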