bartowski committed on
Commit
425330d
1 Parent(s): b61c7ad

Update README.md

Files changed (1)
  1. README.md +28 -1
README.md CHANGED
@@ -12,7 +12,7 @@ lm_studio:
  use_case: coding
  release_date: 29-05-2024
  model_creator: mistralai
- prompt_template: none
+ prompt_template: Mistral Instruct
  system_prompt: none
  base_model: mistral
  original_repo: mistralai/Codestral-22B-v0.1
@@ -26,6 +26,33 @@ base_model: mistralai/Codestral-22B-v0.1
  **Original model**: [Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)<br>
  **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3024](https://github.com/ggerganov/llama.cpp/releases/tag/b3024)<br>
 
+ ## Model Summary:
+
+ Codestral is a brand new coding model released by the Mistral team. This 22B model is the first of this size from Mistral and the team's first ever specialized model.<br>
+ It supports both instruction prompting and Fill in the Middle (FIM) token prediction, making it a strong all-around choice for coding tasks.
+
+ ## Prompt template:
+
+ Choose the `Mistral Instruct` preset in LM Studio.
+
+ Under the hood, the model will see a prompt formatted like so:
+
+ ```
+ <s>[INST] {prompt} [/INST]</s>
+ ```
+
+ This model also supports the following FIM tokens:
+
+ `<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`
+
+ ## Technical Details
+
+ Codestral-22B-v0.1 is trained on a dataset of 80+ programming languages, including Python, Java, C++, JavaScript, and Bash.
+
+ It supports both instruction querying and Fill in the Middle (FIM) querying.
+
+ More details and benchmark information can be found in Mistral's blog post: https://mistral.ai/news/codestral/
+
  ## Special thanks

  🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/)
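
For reference, here is a minimal sketch of driving the prompt formats added in this commit outside of LM Studio, using the `llama-cpp-python` bindings. The bindings themselves, the GGUF filename, and the prefix/suffix/middle ordering in the FIM prompt are my own assumptions rather than details taken from this model card, so treat it as a starting point and check the filename and FIM layout against the actual repo and tokenizer config.

```
# Minimal sketch (not from the model card): querying a Codestral GGUF
# with llama-cpp-python. Install with `pip install llama-cpp-python`.
from llama_cpp import Llama

llm = Llama(
    model_path="Codestral-22B-v0.1-Q4_K_M.gguf",  # placeholder filename, use whichever quant you downloaded
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers when a GPU build is available
)

# Instruction prompting with the Mistral Instruct template shown above.
# The BOS token (<s>) is normally added during tokenization, so it is left
# out of the string; </s> is handled as a stop sequence instead.
instruct_prompt = "[INST] Write a Python function that reverses a string. [/INST]"
result = llm(instruct_prompt, max_tokens=256, temperature=0.2, stop=["</s>"])
print(result["choices"][0]["text"])

# Fill in the Middle: the model fills in the code between prefix and suffix.
# Prefix -> suffix -> middle ordering is a common FIM convention and an
# assumption here, not something stated in the card.
fim_prompt = (
    "<fim_prefix>def add(a, b):\n    return "
    "<fim_suffix>\n\nprint(add(2, 3))\n"
    "<fim_middle>"
)
result = llm(fim_prompt, max_tokens=32, temperature=0.0, stop=["</s>"])
print(result["choices"][0]["text"])
```

Within LM Studio itself none of this is needed: selecting the `Mistral Instruct` preset applies the same template automatically.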