---
language:
  - en
library_name: transformers
pipeline_tag: text-generation
datasets:
  - ise-uiuc/Magicoder-OSS-Instruct-75K
  - ise-uiuc/Magicoder-Evol-Instruct-110K
tags:
  - code
license: apache-2.0
model-index:
  - name: SpeechlessCoder
    results:
      - task:
          type: text-generation
        dataset:
          type: openai_humaneval
          name: HumanEval
        metrics:
          - name: pass@1
            type: pass@1
            value: null
            verified: false
---

# speechless-coder-ds-1.3b

The following datasets were used to fine-tune deepseek-ai/deepseek-coder-1.3b to improve the model's reasoning and planning abilities; a data-loading sketch follows the dataset list below.

Context window length: 8192. Samples were filtered to those longer than 128 and shorter than 8192 tokens.

Total: 185,193 samples (426 MB)

  • ise-uiuc/Magicoder-OSS-Instruct-75K 75,186 samples
  • ise-uiuc/Magicoder-Evol-Instruct-110K 110,007 samples
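
For reference, here is a minimal sketch of loading the two datasets and applying the token-length filter described above. This is an assumption about the preprocessing, not the author's actual training script; the base-model tokenizer name and the way text fields are joined for counting are illustrative.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenizer of the base model being fine-tuned (assumed to be the base checkpoint).
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")

def within_length_budget(example):
    # Keep samples whose total token count falls inside the (128, 8192) window
    # described above. Field names differ between the two datasets, so we
    # simply join every field of the example before tokenizing.
    text = " ".join(str(v) for v in example.values())
    n_tokens = len(tokenizer(text)["input_ids"])
    return 128 < n_tokens < 8192

for name in ("ise-uiuc/Magicoder-OSS-Instruct-75K",
             "ise-uiuc/Magicoder-Evol-Instruct-110K"):
    ds = load_dataset(name, split="train").filter(within_length_budget)
    print(name, len(ds))
```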

Evaluation decoding settings: 50 samples per problem, temperature=0.2, max_tokens=512, top_p=0.95.
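
With multiple samples per problem, pass@1 is presumably estimated with the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021); the sketch below is not taken from this repository.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int = 1) -> float:
    """Unbiased pass@k estimate given n generated samples, c of which pass the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 50 samples per problem, 20 of them passing: pass@1 = 0.4
print(pass_at_k(n=50, c=20, k=1))
```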

Code: https://github.com/uukuguy/speechless

## How to Prompt the Model

This model accepts the Alpaca instruction format.

For example:

```
You are an intelligent programming assistant.

### Instruction:
Implement a linked list in C++

### Response:
```
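
A minimal generation sketch with transformers, assuming the Hub repository id uukuguy/speechless-coder-ds-1.3b and illustrative decoding settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-coder-ds-1.3b"  # assumed Hub repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt, as shown above.
prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\nImplement a linked list in C++\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True,
                         temperature=0.2, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```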

## HumanEval

| Metric           | Value |
| ---------------- | ----- |
| humaneval-python |       |

## Big Code Models Leaderboard

| Model                  | humaneval-python pass@1 |
| ---------------------- | ----------------------- |
| CodeLlama-34B-Python   | 53.29                   |
| CodeLlama-34B-Instruct | 50.79                   |
| CodeLlama-13B-Instruct | 50.6                    |
| CodeLlama-34B          | 45.11                   |
| CodeLlama-13B-Python   | 42.89                   |
| CodeLlama-13B          | 35.07                   |

## BigCode Eval

Average pass@1 across all 13 tasks below: 0.205055

  • metrics_humanevalfixtests-cpp: "pass@1": 0.054878048780487805
  • metrics_humanevalfixtests-go: "pass@1": 0.054878048780487805
  • metrics_humanevalfixtests-java: "pass@1": 0.042682926829268296
  • metrics_humanevalfixtests-js: "pass@1": 0.0975609756097561
  • metrics_humanevalfixtests-python: "pass@1": 0.06707317073170732
  • metrics_humanevalfixtests-rust: "pass@1": 0.018292682926829267

Average pass@1 over the humanevalsynthesize and mbpp tasks: 0.332906

  • metrics_humanevalsynthesize-cpp: "pass@1": 0.3475609756097561
  • metrics_humanevalsynthesize-go: "pass@1": 0.25609756097560976
  • metrics_humanevalsynthesize-java: "pass@1": 0.3353658536585366
  • metrics_humanevalsynthesize-js: "pass@1": 0.35365853658536583
  • metrics_humanevalsynthesize-python: "pass@1": 0.4024390243902439
  • metrics_humanevalsynthesize-rust: "pass@1": 0.20121951219512196
  • metrics_mbpp: "pass@1": 0.434

## LMEval

Open LLM Leaderboard

| Metric     | Value |
| ---------- | ----- |
| ARC        |       |
| HellaSwag  |       |
| MMLU       |       |
| TruthfulQA |       |
| Average    |       |