bartowski committed on
Commit 5bc6817 • 1 Parent(s): 7a82521

Update README.md

Files changed (1):
  1. README.md +47 -45
README.md CHANGED
@@ -83,17 +83,38 @@ model-index:
  - type: pass@1
  value: 40.6
  quantized_by: bartowski
  ---

- ## Llamacpp imatrix Quantizations of starcoder2-15b-instruct-v0.1

- Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2756">b2756</a> for quantization.

- Original model: https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1

- All quants made using the imatrix option with the dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

- ## Prompt format

  ```
  <|endoftext|>You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ -103,60 +124,41 @@ All quants made using imatrix option with dataset provided by Kalomaze [here](ht

  ### Response
  <|endoftext|>
-
  ```

- Note that this model does not support a system prompt.

- ## Download a file (not the whole branch) from below:

- | Filename | Quant type | File Size | Description |
- | -------- | ---------- | --------- | ----------- |
- | [starcoder2-15b-instruct-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q8_0.gguf) | Q8_0 | 16.96GB | Extremely high quality, generally unneeded but max available quant. |
- | [starcoder2-15b-instruct-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q6_K.gguf) | Q6_K | 13.10GB | Very high quality, near perfect, *recommended*. |
- | [starcoder2-15b-instruct-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q5_K_M.gguf) | Q5_K_M | 11.43GB | High quality, *recommended*. |
- | [starcoder2-15b-instruct-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q5_K_S.gguf) | Q5_K_S | 11.02GB | High quality, *recommended*. |
- | [starcoder2-15b-instruct-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q4_K_M.gguf) | Q4_K_M | 9.86GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
- | [starcoder2-15b-instruct-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q4_K_S.gguf) | Q4_K_S | 9.16GB | Slightly lower quality with more space savings, *recommended*. |
- | [starcoder2-15b-instruct-v0.1-IQ4_NL.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ4_NL.gguf) | IQ4_NL | 9.08GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
- | [starcoder2-15b-instruct-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ4_XS.gguf) | IQ4_XS | 8.59GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
- | [starcoder2-15b-instruct-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q3_K_L.gguf) | Q3_K_L | 8.96GB | Lower quality but usable, good for low RAM availability. |
- | [starcoder2-15b-instruct-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q3_K_M.gguf) | Q3_K_M | 8.04GB | Even lower quality. |
- | [starcoder2-15b-instruct-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ3_M.gguf) | IQ3_M | 7.30GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
- | [starcoder2-15b-instruct-v0.1-IQ3_S.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ3_S.gguf) | IQ3_S | 7.00GB | Lower quality, new method with decent performance; recommended over Q3_K_S, same size with better performance. |
- | [starcoder2-15b-instruct-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q3_K_S.gguf) | Q3_K_S | 6.98GB | Low quality, not recommended. |
- | [starcoder2-15b-instruct-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ3_XS.gguf) | IQ3_XS | 6.71GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
- | [starcoder2-15b-instruct-v0.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ3_XXS.gguf) | IQ3_XXS | 6.21GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
- | [starcoder2-15b-instruct-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-Q2_K.gguf) | Q2_K | 6.19GB | Very low quality but surprisingly usable. |
- | [starcoder2-15b-instruct-v0.1-IQ2_M.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ2_M.gguf) | IQ2_M | 5.54GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
- | [starcoder2-15b-instruct-v0.1-IQ2_S.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ2_S.gguf) | IQ2_S | 5.14GB | Very low quality, uses SOTA techniques to be usable. |
- | [starcoder2-15b-instruct-v0.1-IQ2_XS.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ2_XS.gguf) | IQ2_XS | 4.82GB | Very low quality, uses SOTA techniques to be usable. |
- | [starcoder2-15b-instruct-v0.1-IQ2_XXS.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ2_XXS.gguf) | IQ2_XXS | 4.36GB | Lower quality, uses SOTA techniques to be usable. |
- | [starcoder2-15b-instruct-v0.1-IQ1_M.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ1_M.gguf) | IQ1_M | 3.86GB | Extremely low quality, *not* recommended. |
- | [starcoder2-15b-instruct-v0.1-IQ1_S.gguf](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-GGUF/blob/main/starcoder2-15b-instruct-v0.1-IQ1_S.gguf) | IQ1_S | 3.55GB | Extremely low quality, *not* recommended. |

- ## Which file should I choose?

- A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

- The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

- If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

- If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
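The sizing rule above can be sketched as a small helper. This is a hypothetical illustration only (not part of any tooling); the file sizes are taken from the download table above:

```python
# Hypothetical helper illustrating the sizing rule above: pick the largest
# quant whose file size fits within your memory budget minus ~2GB of headroom.
# File sizes (GB) come from the download table above.

QUANT_SIZES_GB = {
    "Q8_0": 16.96, "Q6_K": 13.10, "Q5_K_M": 11.43, "Q5_K_S": 11.02,
    "Q4_K_M": 9.86, "Q4_K_S": 9.16, "IQ4_NL": 9.08, "Q3_K_L": 8.96,
    "IQ4_XS": 8.59, "Q3_K_M": 8.04, "IQ3_M": 7.30, "IQ3_S": 7.00,
    "Q3_K_S": 6.98, "IQ3_XS": 6.71, "IQ3_XXS": 6.21, "Q2_K": 6.19,
    "IQ2_M": 5.54, "IQ2_S": 5.14, "IQ2_XS": 4.82, "IQ2_XXS": 4.36,
    "IQ1_M": 3.86, "IQ1_S": 3.55,
}

def pick_quant(memory_gb, headroom_gb=2.0):
    """Return the largest quant fitting in memory_gb minus headroom, or None."""
    budget = memory_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(12.0))  # a 12GB card leaves a ~10GB budget -> Q4_K_M
print(pick_quant(24.0))  # a 24GB card fits even Q8_0
```

For maximum quality you would pass your combined RAM + VRAM instead of VRAM alone.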
- Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

- If you don't want to think too much, grab one of the K-quants. These are in the format 'QX_K_X', like Q5_K_M.

- If you want to get more into the weeds, you can check out this extremely useful feature chart:

- [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

- But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in the format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
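The naming convention can be expressed as a tiny helper for telling the two families apart (a hypothetical sketch, purely illustrative):

```python
# Hypothetical helper based on the naming convention described above:
# K-quants look like 'QX_K_X' (e.g. Q5_K_M); I-quants look like 'IQX_X' (e.g. IQ3_M).

def quant_family(name):
    if name.startswith("IQ"):
        return "I-quant"
    if name.startswith("Q") and "_K" in name:
        return "K-quant"
    return "legacy/other"  # e.g. Q8_0 or Q4_0

print(quant_family("Q5_K_M"))  # K-quant
print(quant_family("IQ3_M"))   # I-quant
print(quant_family("Q8_0"))    # legacy/other
```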
- These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide.

- The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

- Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
 
  - type: pass@1
  value: 40.6
  quantized_by: bartowski
+ lm_studio:
+   param_count: 15b
+   use_case: coding
+   release_date: 30-04-2024
+   model_creator: BigCode
+   prompt_template: Starcoder2 Instruct
+   system_prompt: none
+   base_model: starcoder2
+   original_repo: bigcode/starcoder2-15b-instruct-v0.1
  ---

+ ## 💫 Community Model> Starcoder2 15B Instruct v0.1 by BigCode

+ *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.

+ **Model creator:** [bigcode](https://huggingface.co/bigcode)<br>
+ **Original model:** [starcoder2-15b-instruct-v0.1](https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1)<br>
+ **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2756](https://github.com/ggerganov/llama.cpp/releases/tag/b2756)<br>

+ From the creators: "We introduce StarCoder2-15B-Instruct-v0.1, the very first entirely self-aligned code Large Language Model (LLM) trained with a fully permissive and transparent pipeline. Our open-source pipeline uses StarCoder2-15B to generate thousands of instruction-response pairs, which are then used to fine-tune StarCoder2-15B itself without any human annotations or distilled data from huge and proprietary LLMs."
+
+ ## Model Summary:
+ Starcoder2-15B-Instruct-v0.1 is described by its creators as the first entirely self-aligned code model with a fully permissive and transparent pipeline.<br>
+ This model is meant to be used for coding instructions in a <b>single turn</b>; other styles of interaction may result in less accurate responses.<br>
+ Starcoder2 has been primarily finetuned for Python code generation, and as such should primarily be used for Python tasks.
+
+ ## Prompt Template:
+
+ Choose the 'Starcoder2 Instruct' preset in your LM Studio.
+
+ Under the hood, the model will see a prompt that's formatted like so:

  ```
  <|endoftext|>You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

  ### Response
  <|endoftext|>
  ```
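Outside of LM Studio, the same prompt can be assembled by hand. The sketch below is a hypothetical illustration: it assumes the middle of the template (elided in the diff above) is an `### Instruction` block holding the user's request, as described on the upstream model card:

```python
# Hypothetical sketch of assembling the Starcoder2 Instruct prompt by hand.
# Assumes an "### Instruction" block sits between the system line and
# "### Response", per the upstream model card.

SYSTEM = ("You are an exceptionally intelligent coding assistant that "
          "consistently delivers accurate and reliable responses to user instructions.")

def build_prompt(instruction):
    return (
        "<|endoftext|>" + SYSTEM + "\n\n"
        "### Instruction\n" + instruction + "\n\n"
        "### Response\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The model's generation should then be stopped at the closing `<|endoftext|>` token.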

+ ## Use case and examples
+
+ This model should be used for single-turn, coding-related instructions.
+
+ ## Coding with requirements
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/rNqulMDumAp7s1LdIAerC.png)

+ ## Creating unit tests

+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/q_VNUflz6tcAScY_yDLet.png)

+ ## More coding examples

+ ## Technical Details

+ Starcoder2 15B Instruct was trained primarily on Python code generation tasks. Starcoder2 15B (non-instruct) was used to generate thousands of instruction-response pairs, and the results were used to fine-tune an instruct model without human annotation or distilled data.

+ The dataset created is open and available: [self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k)

+ The code used to create the self-alignment data has been shared here: [starcoder2-self-align](https://github.com/bigcode-project/starcoder2-self-align)

+ The results of the self-alignment are extremely promising, with significantly higher scores across all coding benchmarks, which is a great sign for future progress.

+ More details are available on their model card [here](https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1)

+ ## Special thanks

+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

+ 🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for these quants, which improves the overall quality!
 
+ ## Disclaimers

+ LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.