---
license: other
language:
  - en
tags:
  - causal-lm
  - code
metrics:
  - code_eval
library_name: transformers
model-index:
  - name: stabilityai/stable-code-instruct-3b
    results:
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (Python)
        metrics:
          - name: pass@1
            type: pass@1
            value: 32.4
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (C++)
        metrics:
          - name: pass@1
            type: pass@1
            value: 30.9
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (Java)
        metrics:
          - name: pass@1
            type: pass@1
            value: 32.1
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (JavaScript)
        metrics:
          - name: pass@1
            type: pass@1
            value: 32.1
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (PHP)
        metrics:
          - name: pass@1
            type: pass@1
            value: 24.2
            verified: false
      - task:
          type: text-generation
        dataset:
          type: nuprl/MultiPL-E
          name: MultiPL-HumanEval (Rust)
        metrics:
          - name: pass@1
            type: pass@1
            value: 23
            verified: false
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp Quantizations of stable-code-instruct-3b

Using llama.cpp release b2440 for quantization.

Original model: https://huggingface.co/stabilityai/stable-code-instruct-3b
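For context, these quants follow llama.cpp's usual two-step flow: convert the Hugging Face checkpoint to an f16 GGUF, then quantize it to the desired type. The snippet below is only a minimal sketch of that flow, assuming a built local checkout of llama.cpp at (or near) release b2440 in `./llama.cpp` and the original model cloned to `./stable-code-instruct-3b`; both paths are placeholders.

```python
import subprocess

# Placeholder paths (assumptions): a built llama.cpp checkout and a local
# clone of stabilityai/stable-code-instruct-3b.
MODEL_DIR = "./stable-code-instruct-3b"
F16_GGUF = "stable-code-instruct-3b-f16.gguf"
QUANT_GGUF = "stable-code-instruct-3b-Q4_K_M.gguf"

# Step 1: convert the HF checkpoint to an f16 GGUF.
subprocess.run(
    ["python", "./llama.cpp/convert-hf-to-gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# Step 2: quantize the f16 GGUF to one of the types listed below (Q4_K_M here).
subprocess.run(
    ["./llama.cpp/quantize", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```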

Download a file (not the whole branch) from below; a scripted example follows the table:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| stable-code-instruct-3b-Q8_0.gguf | Q8_0 | 2.97GB | Extremely high quality, generally unneeded but max available quant. |
| stable-code-instruct-3b-Q6_K.gguf | Q6_K | 2.29GB | Very high quality, near perfect, recommended. |
| stable-code-instruct-3b-Q5_K_M.gguf | Q5_K_M | 1.99GB | High quality, very usable. |
| stable-code-instruct-3b-Q5_K_S.gguf | Q5_K_S | 1.94GB | High quality, very usable. |
| stable-code-instruct-3b-Q5_0.gguf | Q5_0 | 1.94GB | High quality, older format, generally not recommended. |
| stable-code-instruct-3b-Q4_K_M.gguf | Q4_K_M | 1.70GB | Good quality, similar to 4.25 bpw. |
| stable-code-instruct-3b-Q4_K_S.gguf | Q4_K_S | 1.62GB | Slightly lower quality with small space savings. |
| stable-code-instruct-3b-IQ4_NL.gguf | IQ4_NL | 1.61GB | Good quality, newer quantization method, similar to Q4_K_S. |
| stable-code-instruct-3b-IQ4_XS.gguf | IQ4_XS | 1.53GB | Decent quality, new method with similar performance to Q4. |
| stable-code-instruct-3b-Q4_0.gguf | Q4_0 | 1.60GB | Decent quality, older format, generally not recommended. |
| stable-code-instruct-3b-IQ3_M.gguf | IQ3_M | 1.31GB | Medium-low quality, new method with decent performance. |
| stable-code-instruct-3b-IQ3_S.gguf | IQ3_S | 1.25GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| stable-code-instruct-3b-Q3_K_L.gguf | Q3_K_L | 1.50GB | Lower quality but usable, good for low RAM availability. |
| stable-code-instruct-3b-Q3_K_M.gguf | Q3_K_M | 1.39GB | Even lower quality. |
| stable-code-instruct-3b-Q3_K_S.gguf | Q3_K_S | 1.25GB | Low quality, not recommended. |
| stable-code-instruct-3b-Q2_K.gguf | Q2_K | 1.08GB | Extremely low quality, not recommended. |
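
If you only want a single quant, the `huggingface_hub` client can fetch one file directly instead of cloning the whole branch. A minimal sketch, assuming the repo id `bartowski/stable-code-instruct-3b-GGUF` and the Q4_K_M file; the llama-cpp-python step at the end is optional and just shows one way to load the result.

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file rather than the whole repository.
gguf_path = hf_hub_download(
    repo_id="bartowski/stable-code-instruct-3b-GGUF",  # assumed repo id
    filename="stable-code-instruct-3b-Q4_K_M.gguf",
)

# Optional: load and run it with the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```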

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski