Update README.md
README.md

use_case: coding
release_date: 04-09-2024
model_creator: 01-ai
prompt_template: ChatML
base_model: Yi
original_repo: 01-ai/Yi-Coder-9B-Chat
---

**Model creator:** [01-ai](https://huggingface.co/01-ai)<br>
**Original model:** [Yi-Coder-9B-Chat](https://huggingface.co/01-ai/Yi-Coder-9B-Chat)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3658](https://github.com/ggerganov/llama.cpp/releases/tag/b3658)<br>
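
If you want to grab a quant outside of LM Studio's built-in downloader, a sketch like the following should work (the repo and file names are assumptions based on bartowski's usual naming; check the repo's file list for the exact quant you want):

```bash
# Hypothetical example: fetch a single quant from the Hugging Face repo.
# Repo id and file name are assumed; verify them on the model page first.
huggingface-cli download lmstudio-community/Yi-Coder-9B-Chat-GGUF \
  Yi-Coder-9B-Chat-Q4_K_M.gguf --local-dir ./models
```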

## Model Summary:

Yi Coder 9B Chat is a new coding model from 01-ai, supporting a staggering 52 programming languages and featuring a max context length of 128k, making it great for ingesting large codebases.<br>
This model is tuned for chatting, not autocompletion, so it should be chatted with for programming questions.<br>
It is the first model under 10B parameters to pass 20% on LiveCodeBench.

## Prompt Template:

Choose the `ChatML` preset in LM Studio.
Under the hood, the model will see a prompt that's formatted like so:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```
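
You can also script against the model through LM Studio's local server, which speaks the OpenAI chat API; the ChatML wrapping above is applied for you from the `messages` list. A minimal sketch, assuming the default server port and that the model identifier matches whatever your LM Studio instance lists:

```bash
# Minimal chat request against LM Studio's OpenAI-compatible local server.
# The "model" value is an assumption; use the identifier LM Studio shows.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lmstudio-community/Yi-Coder-9B-Chat-GGUF",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Write a binary search in Python."}
    ]
  }'
```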

## Technical Details

Trained on an extensive set of languages:
```bash
'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
```

With its 128k context length, it achieves a 23% pass rate on LiveCodeBench, surpassing even some SOTA 15B-33B models.
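
Since the quants target `llama.cpp` release b3658, they can also be run directly with `llama-cli`, in which case you choose how much of the 128k window to allocate at load time. A rough sketch, with the file name assumed as above:

```bash
# Hypothetical direct run with llama.cpp (release b3658 or newer).
# -cnv starts conversation mode using the model's built-in chat template;
# -c sets the context window to allocate (up to the model's 128k maximum).
./llama-cli -m ./models/Yi-Coder-9B-Chat-Q4_K_M.gguf -cnv -c 32768
```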

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.