Update README.md
README.md CHANGED
@@ -4,58 +4,17 @@ license_name: deepseek
license_link: LICENSE
---

-<p align="center">
-<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
-</p>
-<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
-<hr>


-
-### 1. Introduction of Deepseek Coder
-
-Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models across multiple programming languages and various benchmarks.
-
-- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese.
-
-- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
-
-- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
-
-- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task support project-level code completion and infilling (a usage sketch follows this list).
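To make the fill-in-the-blank capability above concrete, here is a minimal infilling sketch. Treat it as a sketch rather than the card's official usage: the FIM sentinel strings (`<｜fim▁begin｜>`, `<｜fim▁hole｜>`, `<｜fim▁end｜>`) and the choice of the base model for infilling follow the code-insertion example in the DeepSeek-Coder GitHub repository, and should be verified there.

```python
# Minimal fill-in-the-middle (FIM) sketch.
# Assumptions: the sentinel strings below match the tokenizer's special
# tokens, and infilling uses the *base* model, since fill-in-the-blank
# is a pre-training task rather than a chat task.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True).cuda()

# The model generates the code missing at <｜fim▁hole｜> (here, a loop header).
input_text = """<｜fim▁begin｜>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<｜fim▁hole｜>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<｜fim▁end｜>"""

inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=256)
# Decode only the newly generated tokens, i.e. the infill itself.
print(tokenizer.decode(outputs[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))
```

With the hole placed as above, the expected infill is the missing loop header, e.g. `for i in range(1, len(arr)):`.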
-
-
-
-### 2. Model Summary
+### 1. Introduction of AutoDev Coder
+
+AutoDev Coder is based on deepseek-coder-6.7b-instruct, fine-tuned with the [autodev-datasets](https://huggingface.co/datasets/unit-mesh/autodev-datasets) dataset (a data-loading sketch follows).
+
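Since the card now points at a fine-tuning dataset, a minimal sketch for inspecting it with the `datasets` library follows. The repository id comes from the link above; the default configuration, the `train` split name, and the record layout are assumptions to confirm on the dataset page.

```python
# Minimal sketch: peek at the AutoDev fine-tuning data.
# Assumptions (to verify on the dataset page): the repository loads under
# its default configuration and exposes a "train" split.
from datasets import load_dataset

ds = load_dataset("unit-mesh/autodev-datasets", split="train")
print(ds)     # features and row count
print(ds[0])  # first instruction record
```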
deepseek-coder-6.7b-instruct is a 6.7B parameter model initialized from deepseek-coder-6.7b-base and fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)

-
-### 3. How to Use
-Here are some examples of how to use our model.
-#### Chat Model Inference
-```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
-model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True).cuda()
-messages = [
-    {'role': 'user', 'content': "write a quick sort algorithm in python."}
-]
-inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
-# 32021 is the id of the <|EOT|> token
-outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021)
-print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
-```
-
-### 4. License
+### 2. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.

See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
-
-### 5. Contact
-
-If you have any questions, please raise an issue or contact us at [agi_code@deepseek.com](mailto:agi_code@deepseek.com).
-