---
language:
  - en
license: other
library_name: transformers
tags:
  - causal-lm
  - code
  - mlx
metrics:
  - code_eval
model-index:
  - name: stabilityai/stable-code-instruct-3b
    results:
      - task:
          type: text-generation
        dataset:
          name: MultiPL-HumanEval (Python)
          type: nuprl/MultiPL-E
        metrics:
          - type: pass@1
            value: 32.4
            name: pass@1
            verified: false
          - type: pass@1
            value: 30.9
            name: pass@1
            verified: false
          - type: pass@1
            value: 32.1
            name: pass@1
            verified: false
          - type: pass@1
            value: 32.1
            name: pass@1
            verified: false
          - type: pass@1
            value: 24.2
            name: pass@1
            verified: false
          - type: pass@1
            value: 23
            name: pass@1
            verified: false
---

# mlx-community/stable-code-instruct-3b-4bit

This model was converted to MLX format from [stabilityai/stable-code-instruct-3b](https://huggingface.co/stabilityai/stable-code-instruct-3b) using mlx-lm version 0.4.0. Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/stable-code-instruct-3b-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
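Because this is an instruct-tuned model, a raw prompt like `"hello"` works, but chat-formatted prompts usually produce better completions. A minimal sketch of building such a prompt by hand, assuming the model follows the ChatML-style `<|im_start|>`/`<|im_end|>` tags described on the base model's card (in practice, prefer `tokenizer.apply_chat_template(...)` if the tokenizer ships a chat template):

```python
def build_prompt(user_message: str) -> str:
    # Assumption: ChatML-style turn markers, per the base stabilityai card.
    # The trailing "assistant" header cues the model to start its reply.
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")
```

The resulting string can then be passed as the `prompt` argument to `generate` in place of the plain `"hello"` above.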