---
language:
- en
license: mit
tags:
- code
- text-generation-inference
datasets:
- glaiveai/glaive-code-assistant-v2
- TokenBender/code_instructions_122k_alpaca_style
metrics:
- code_eval
pipeline_tag: text-generation
model-index:
- name: CodeNinja-1.0-OpenChat-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 54.47
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=beowolx/CodeNinja-1.0-OpenChat-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 21.71
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=beowolx/CodeNinja-1.0-OpenChat-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.21
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=beowolx/CodeNinja-1.0-OpenChat-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.93
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=beowolx/CodeNinja-1.0-OpenChat-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.54
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=beowolx/CodeNinja-1.0-OpenChat-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 22.39
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=beowolx/CodeNinja-1.0-OpenChat-7B
      name: Open LLM Leaderboard
---

<p align="center">
<img width="700px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/64b566ab04fa6584c03b5247/5COagfF6EwrV4utZJ-ClI.png">
</p>
<hr>

# CodeNinja: Your Advanced Coding Assistant

## Overview

CodeNinja is an enhanced version of the renowned model [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210), fine-tuned through Supervised Fine-Tuning (SFT) on two expansive datasets comprising over 400,000 coding instructions. Designed to be an indispensable tool for coders, CodeNinja aims to integrate seamlessly into your daily coding routine.

Discover the quantized versions at: [beowolx/CodeNinja-1.0-OpenChat-7B-GGUF](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF).

### Key Features

- **Expansive Training Database**: CodeNinja has been refined with datasets from [glaiveai/glaive-code-assistant-v2](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v2) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style), incorporating around 400,000 coding instructions across various languages including Python, C, C++, Rust, Java, JavaScript, and more.

- **Flexibility and Scalability**: At 7B parameters, CodeNinja is compact enough to run in local environments.

- **Advanced Code Completion**: With a context window of 8192 tokens, it supports comprehensive, project-level code completion.

## Prompt Format

CodeNinja uses the same prompt format as OpenChat 3.5; follow it exactly for reliable results:

```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```

🚨 Important: Ensure the use of `<|end_of_turn|>` as the end-of-generation token.

**Adhering to this format is crucial for optimal results.**
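
If you build the prompt string by hand instead of going through a chat template, a minimal sketch such as the one below reproduces the format. The `build_prompt` helper is purely illustrative and is not part of this repository:

```python
# Illustrative helper (not part of the model repo): assemble an OpenChat-style prompt.
def build_prompt(turns: list[tuple[str, str]]) -> str:
    """turns: (role, content) pairs where role is "User" or "Assistant"."""
    prompt = ""
    for role, content in turns:
        prompt += f"GPT4 Correct {role}: {content}<|end_of_turn|>"
    # The trailing assistant header asks the model to produce the next reply.
    return prompt + "GPT4 Correct Assistant:"

print(build_prompt([("User", "Write a function that reverses a string.")]))
# GPT4 Correct User: Write a function that reverses a string.<|end_of_turn|>GPT4 Correct Assistant:
```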

## Usage Instructions

### Using LM Studio

The simplest way to engage with CodeNinja is via the [quantized versions](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF) on [LM Studio](https://lmstudio.ai/). Ensure you select the "OpenChat" preset, which incorporates the necessary prompt format. The preset is also available in this [gist](https://gist.github.com/beowolx/b219466681c02ff67baf8f313a3ad817).
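
Outside of LM Studio, the same GGUF files can be loaded with any llama.cpp-based runtime. The sketch below uses `llama-cpp-python` as one example; the GGUF filename is an assumption and should be replaced with the actual file from the quantized repository:

```python
from llama_cpp import Llama

# Filename is an assumption: pick the actual GGUF file from
# beowolx/CodeNinja-1.0-OpenChat-7B-GGUF and adjust the path accordingly.
llm = Llama(model_path="./codeninja-1.0-openchat-7b.Q4_K_M.gguf", n_ctx=8192)

prompt = (
    "GPT4 Correct User: Write a Python function that reverses a string.<|end_of_turn|>"
    "GPT4 Correct Assistant:"
)

# Stop on the end-of-turn marker, matching the prompt format above.
output = llm(prompt, max_tokens=256, stop=["<|end_of_turn|>"])
print(output["choices"][0]["text"].strip())
```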

### Using the Transformers Library

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Initialize the model
model_path = "beowolx/CodeNinja-1.0-OpenChat-7B"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
# Load the OpenChat tokenizer
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-1210", use_fast=True)

def generate_one_completion(prompt: str):
    messages = [
        {"role": "user", "content": prompt}
    ]

    # Build the token IDs with the OpenChat chat template; add_generation_prompt
    # appends the "GPT4 Correct Assistant:" header so the model produces the reply.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Produce the completion
    generate_ids = model.generate(
        input_ids,
        max_length=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

    # Decode only the newly generated tokens (skip the prompt) and trim trailing text
    completion = tokenizer.decode(
        generate_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    completion = completion.split("\n\n\n")[0].strip()

    return completion
```
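
For example, calling the helper with an arbitrary prompt:

```python
print(generate_one_completion("Write a Python function that checks whether a number is prime."))
```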

## License
CodeNinja is licensed under the MIT License, with model usage subject to the Model License.

## Contact
For queries or support, please open an issue in the repository.

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beowolx__CodeNinja-1.0-OpenChat-7B).

|      Metric       |Value|
|-------------------|----:|
|Avg.               |20.21|
|IFEval (0-Shot)    |54.47|
|BBH (3-Shot)       |21.71|
|MATH Lvl 5 (4-Shot)| 5.21|
|GPQA (0-shot)      | 5.93|
|MuSR (0-shot)      |11.54|
|MMLU-PRO (5-shot)  |22.39|