CISCai committed
Commit 4c1ab44
Parent: a24bb6a

Added IQ4_XS
.gitattributes CHANGED
@@ -44,3 +44,4 @@ OpenCodeInterpreter-DS-6.7B.IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
 OpenCodeInterpreter-DS-6.7B.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
 OpenCodeInterpreter-DS-6.7B.IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
 OpenCodeInterpreter-DS-6.7B.imatrix-4096.dat filter=lfs diff=lfs merge=lfs -text
+OpenCodeInterpreter-DS-6.7B.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
OpenCodeInterpreter-DS-6.7B.IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce70c09cc1ab5d6415dd24ff452cb5b4e1b6a1f7fd5853ded6cbfc8006cf9b9e
+size 3621315744
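The Git LFS pointer above records the object's SHA-256 (`oid`) and byte size, so a download can be checked against it. A minimal sketch, assuming the GGUF has been downloaded to a local path of your choosing:

```python
import hashlib

# Values copied from the Git LFS pointer file above.
EXPECTED_OID = "ce70c09cc1ab5d6415dd24ff452cb5b4e1b6a1f7fd5853ded6cbfc8006cf9b9e"
EXPECTED_SIZE = 3621315744

def verify_lfs_object(path: str, oid: str, size: int) -> bool:
    """Check a downloaded file against the oid/size from its LFS pointer."""
    h = hashlib.sha256()
    total = 0
    with open(path, "rb") as f:
        # Stream in 1 MiB chunks so a multi-GB GGUF never sits fully in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
            total += len(chunk)
    return total == size and h.hexdigest() == oid
```

Usage would be `verify_lfs_object("OpenCodeInterpreter-DS-6.7B.IQ4_XS.gguf", EXPECTED_OID, EXPECTED_SIZE)`; the size comparison is a cheap early sanity check, the digest the real integrity test.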
README.md CHANGED
@@ -47,7 +47,7 @@ You are an AI programming assistant, utilizing the Deepseek Coder model, develop
 <!-- compatibility_gguf start -->
 ## Compatibility
 
-These quantised GGUFv3 files are compatible with llama.cpp from February 26th 2024 onwards, as of commit [a33e6a0](https://github.com/ggerganov/llama.cpp/commit/a33e6a0d2a66104ea9a906bdbf8a94d050189d91)
+These quantised GGUFv3 files are compatible with llama.cpp from February 26th 2024 onwards, as of commit [0becb22](https://github.com/ggerganov/llama.cpp/commit/0becb22ac05b6542bd9d5f2235691aa1d3d4d307)
 
 They are also compatible with many third party UIs and libraries provided they are built using a recent llama.cpp.
 
@@ -67,6 +67,7 @@ The new methods available are:
 * GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw
 * GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw
 * GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw
+* GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw
 
 Refer to the Provided Files table below to see what files use which methods, and how.
 </details>
@@ -86,6 +87,7 @@ Refer to the Provided Files table below to see what files use which methods, and
 | [OpenCodeInterpreter-DS-6.7B.IQ3_XS.gguf](https://huggingface.co/CISCai/OpenCodeInterpreter-DS-6.7B-SOTA-GGUF/blob/main/OpenCodeInterpreter-DS-6.7B.IQ3_XS.gguf) | IQ3_XS | 3 | 2.7 GB| 4.7 GB | small, substantial quality loss |
 | [OpenCodeInterpreter-DS-6.7B.IQ3_S.gguf](https://huggingface.co/CISCai/OpenCodeInterpreter-DS-6.7B-SOTA-GGUF/blob/main/OpenCodeInterpreter-DS-6.7B.IQ3_S.gguf) | IQ3_S | 3 | 2.8 GB| 4.8 GB | small, greater quality loss |
 | [OpenCodeInterpreter-DS-6.7B.IQ3_M.gguf](https://huggingface.co/CISCai/OpenCodeInterpreter-DS-6.7B-SOTA-GGUF/blob/main/OpenCodeInterpreter-DS-6.7B.IQ3_M.gguf) | IQ3_M | 3 | 3.0 GB| 5.0 GB | medium, balanced quality - recommended |
+| [OpenCodeInterpreter-DS-6.7B.IQ4_XS.gguf](https://huggingface.co/CISCai/OpenCodeInterpreter-DS-6.7B-SOTA-GGUF/blob/main/OpenCodeInterpreter-DS-6.7B.IQ4_XS.gguf) | IQ4_XS | 4 | 3.4 GB| 5.4 GB | small, substantial quality loss |
 
 Generated importance matrix file: [OpenCodeInterpreter-DS-6.7B.imatrix.dat](https://huggingface.co/CISCai/OpenCodeInterpreter-DS-6.7B-SOTA-GGUF/blob/main/OpenCodeInterpreter-DS-6.7B.imatrix.dat)
 Generated importance matrix file (4K context): [OpenCodeInterpreter-DS-6.7B.imatrix-4096.dat](https://huggingface.co/CISCai/OpenCodeInterpreter-DS-6.7B-SOTA-GGUF/blob/main/OpenCodeInterpreter-DS-6.7B.imatrix-4096.dat)
@@ -97,7 +99,7 @@ Generated importance matrix file (4K context): [OpenCodeInterpreter-DS-6.7B.imat
 <!-- README_GGUF.md-how-to-run start -->
 ## Example `llama.cpp` command
 
-Make sure you are using `llama.cpp` from commit [a33e6a0](https://github.com/ggerganov/llama.cpp/commit/a33e6a0d2a66104ea9a906bdbf8a94d050189d91) or later.
+Make sure you are using `llama.cpp` from commit [0becb22](https://github.com/ggerganov/llama.cpp/commit/0becb22ac05b6542bd9d5f2235691aa1d3d4d307) or later.
 
 ```shell
 ./main -ngl 33 -m OpenCodeInterpreter-DS-6.7B.IQ3_M.gguf --color -c 16384 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\n### Instruction:\n{prompt}\n### Response:"
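The "effectively using N bpw" figures above can be sanity-checked against the file sizes: a quantized GGUF is roughly parameters × bits-per-weight ÷ 8 bytes. A minimal sketch, where the ~6.7 billion parameter count is an assumption taken from the "6.7B" in the model name:

```python
# Rough size estimate for a quantized GGUF: parameters * bits-per-weight / 8.
# 6.7e9 parameters is assumed from the "6.7B" in the model name.
PARAMS = 6.7e9

def estimated_bytes(bpw: float, params: float = PARAMS) -> int:
    """Approximate file size for a given effective bits-per-weight."""
    return int(params * bpw / 8)

actual = 3_621_315_744          # size field from the IQ4_XS LFS pointer
approx = estimated_bytes(4.25)  # IQ4_XS: effectively 4.25 bpw
print(f"estimated {approx / 1e9:.2f} GB vs actual {actual / 1e9:.2f} GB")
```

The estimate lands within a few percent of the actual file; the remainder is GGUF metadata plus tensors (e.g. embeddings and output layer) stored at higher precision, which is why the listed bpw is "effective" rather than exact.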