TheBloke committed
Commit ae12f5d
1 Parent(s): e6f26d0

Upload README.md

Files changed (1)
  1. README.md +3 -10
README.md CHANGED
@@ -6,7 +6,7 @@ license_link: LICENSE
  license_name: deepseek
  model_creator: DeepSeek
  model_name: Deepseek Coder 6.7B Instruct
- model_type: llama
  prompt_template: 'You are an AI programming assistant, utilizing the Deepseek Coder
  model, developed by Deepseek Company, and you only answer questions related to computer
  science. For politically sensitive questions, security and privacy issues, and other
@@ -74,15 +74,8 @@ You are an AI programming assistant, utilizing the Deepseek Coder model, develop
  ```

  <!-- prompt-template end -->
- <!-- licensing start -->
- ## Licensing

- The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.

- As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
- In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [DeepSeek's Deepseek Coder 6.7B Instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct).
- <!-- licensing end -->

  <!-- README_GPTQ.md-compatible clients start -->
  ## Known compatible clients / servers
@@ -391,9 +384,9 @@ And thank you again to a16z for their generous grant.

  ### 1. Introduction of Deepseek Coder

- Deepseek Coder comprises a series of code language models trained on both 87% code and 13% natural language in English and Chinese, with each model pre-trained on 2T tokens. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on project-level code corpus by employing a window size of 16K and a extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.

- - **Massive Training Data**: Trained on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.

  - **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
 
  license_name: deepseek
  model_creator: DeepSeek
  model_name: Deepseek Coder 6.7B Instruct
+ model_type: deepseek
  prompt_template: 'You are an AI programming assistant, utilizing the Deepseek Coder
  model, developed by Deepseek Company, and you only answer questions related to computer
  science. For politically sensitive questions, security and privacy issues, and other
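
The prompt template above is only the system message; the diff truncates it. As a minimal sketch, here is how a full prompt might be assembled for this instruct model. The `### Instruction:` / `### Response:` framing is the format commonly used with Deepseek Coder Instruct and is an assumption here, not part of this diff.

```python
# Hypothetical sketch: assembling a prompt for Deepseek Coder Instruct.
# SYSTEM_MESSAGE is taken from the prompt_template above (truncated in
# the diff); the Instruction/Response framing is assumed, not shown here.

SYSTEM_MESSAGE = (
    "You are an AI programming assistant, utilizing the Deepseek Coder "
    "model, developed by Deepseek Company, and you only answer questions "
    "related to computer science."
)

def build_prompt(user_request: str) -> str:
    """Wrap a user request in the assumed instruct template."""
    return (
        f"{SYSTEM_MESSAGE}\n"
        f"### Instruction:\n{user_request}\n"
        f"### Response:\n"
    )

print(build_prompt("Write a function that sums a list."))
```

The resulting string would be passed as the prompt to whichever client or server loads the quantized model.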
 
  ```

  <!-- prompt-template end -->


  <!-- README_GPTQ.md-compatible clients start -->
  ## Known compatible clients / servers
 

  ### 1. Introduction of Deepseek Coder

+ Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus, employing a window size of 16K and an extra fill-in-the-blank task to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.

+ - **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese.

  - **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
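
The fill-in-the-blank (infilling) training mentioned above means the model can complete code between an existing prefix and suffix. As a sketch, an infilling prompt is typically built with sentinel tokens; the exact token strings below are the ones documented for the Deepseek Coder base models and are an assumption here, not stated in this README diff.

```python
# Hypothetical sketch of a fill-in-the-middle (infilling) prompt.
# The sentinel token strings are assumptions (from the Deepseek Coder
# base-model documentation), not part of this diff.

FIM_BEGIN = "<｜fim▁begin｜>"
FIM_HOLE = "<｜fim▁hole｜>"
FIM_END = "<｜fim▁end｜>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code that belongs between prefix and suffix."""
    return f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}"

prompt = build_fim_prompt(
    prefix="def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n",
    suffix="\n    return quicksort(left) + [pivot] + quicksort(right)\n",
)
```

The model would then be expected to emit the code that belongs in place of the hole token, here the pivot selection and partitioning lines.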