beowolx committed 40e5d59 (1 parent: 30331e5): Create README.md
---
license: mit
datasets:
- glaiveai/glaive-code-assistant-v2
- TokenBender/code_instructions_122k_alpaca_style
language:
- en
metrics:
- code_eval
pipeline_tag: text-generation
tags:
- code
- text-generation-inference
---

<p align="center">
<img width="700px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/64b566ab04fa6584c03b5247/5COagfF6EwrV4utZJ-ClI.png">
</p>
<hr>
# CodeNinja: Your Advanced Coding Assistant

## Overview

CodeNinja is an enhanced version of the renowned model [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210), fine-tuned through Supervised Fine-Tuning on two expansive datasets comprising over 400,000 coding instructions. Designed to be an indispensable tool for coders, CodeNinja aims to integrate seamlessly into your daily coding routine.
### Key Features

- **Expansive Training Database**: CodeNinja has been refined with datasets from [glaiveai/glaive-code-assistant-v2](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v2) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style), incorporating around 400,000 coding instructions across various languages including Python, C, C++, Rust, Java, JavaScript, and more.

- **Flexibility and Scalability**: Available in a 7B model size, CodeNinja is adaptable for local runtime environments.

- **Advanced Code Completion**: With a substantial context window of 8192 tokens, it supports comprehensive project-level code completion.
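As a rough illustration of staying inside the 8192-token window, you might reserve room for generation before sending a long prompt. This is only a sketch: the whitespace split is a stand-in for real tokenization (use the model's tokenizer for accurate counts), and `MAX_CONTEXT` and `truncate_to_budget` are illustrative names, not part of any CodeNinja API.

```python
# Sketch: keep the prompt within the context window while reserving
# space for the tokens the model will generate.
# NOTE: splitting on whitespace only approximates real tokenization.
MAX_CONTEXT = 8192

def truncate_to_budget(text: str, max_new_tokens: int = 256) -> str:
    budget = MAX_CONTEXT - max_new_tokens
    words = text.split()
    if len(words) <= budget:
        return text
    # Keep the most recent part of the prompt, usually the most relevant.
    return " ".join(words[-budget:])
```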
## Prompt Format

CodeNinja maintains the same prompt structure as OpenChat 3.5. Effective utilization requires adherence to this format:

```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```

🚨 Important: Ensure the use of `<|end_of_turn|>` as the end-of-generation token.

**Adhering to this format is crucial for optimal results.**
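If your runtime does not apply a chat template for you, the format above can be assembled by hand. The role prefixes and `<|end_of_turn|>` token come straight from the example; the helper name `build_prompt` is illustrative:

```python
# Sketch: assemble the OpenChat-style prompt shown above from a list of turns.
def build_prompt(turns: list[dict]) -> str:
    parts = []
    for turn in turns:
        role = "GPT4 Correct User" if turn["role"] == "user" else "GPT4 Correct Assistant"
        parts.append(f'{role}: {turn["content"]}<|end_of_turn|>')
    # The trailing assistant prefix cues the model to generate its reply.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

prompt = build_prompt([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"},
])
```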
## Usage Instructions

### Using LM Studio

The simplest way to engage with CodeNinja is via the [quantized versions](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF) on [LM Studio](https://lmstudio.ai/). Ensure you select the "OpenChat" preset, which incorporates the necessary prompt format. The preset is also available in this [gist](https://gist.github.com/beowolx/b219466681c02ff67baf8f313a3ad817).
### Using the Transformers Library

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Initialize the model
model_path = "beowolx/CodeNinja-1.0-OpenChat-7B"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
# Load the OpenChat tokenizer
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-1210", use_fast=True)

def generate_one_completion(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]

    # Build token IDs with the chat template; add_generation_prompt appends
    # the trailing "GPT4 Correct Assistant:" cue for the model's reply
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    # Produce the completion on whichever device the model was placed on
    generate_ids = model.generate(
        torch.tensor([input_ids]).to(model.device),
        max_length=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

    # Decode and trim the completion
    completion = tokenizer.decode(generate_ids[0], skip_special_tokens=True)
    completion = completion.split("\n\n\n")[0].strip()

    return completion
```
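Decoding with `skip_special_tokens=True` already drops `<|end_of_turn|>`, but runtimes that return raw text (for example the GGUF quantized builds) may leave the token in the output. A small post-processing sketch, where the helper name `strip_end_of_turn` is illustrative:

```python
END_OF_TURN = "<|end_of_turn|>"

def strip_end_of_turn(raw: str) -> str:
    # Keep only the text generated before the first end-of-generation token.
    return raw.split(END_OF_TURN, 1)[0].strip()
```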

## License
CodeNinja is licensed under the MIT License, with model usage subject to the Model License.

## Contact
For queries or support, please open an issue in the repository.