Update README.md
README.md (changed):

````diff
@@ -12,7 +12,7 @@ Deepseek Coder comprises a series of code language models trained on both 87% co
 
 - **Massive Training Data**: Trained on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
 
-- **Highly Flexible & Scalable**: Offered in model sizes of
+- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
 
 - **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
 
@@ -167,8 +167,6 @@ outputs = model.generate(**inputs, max_new_tokens=140)
 print(tokenizer.decode(outputs[0]))
 ```
 
----
-In the following scenario, the Deepseek-Coder 7B model effectively calls the **IrisClassifier** class and its member functions from the `model.py` file, and also utilizes functions from the `utils.py` file, to correctly complete the **main** function in the `main.py` file for model training and evaluation.
 
 
 ### 4. License
````
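For context, the second hunk shows only the tail of the README's code-generation example (`outputs = model.generate(...)` and `print(tokenizer.decode(outputs[0]))`). A minimal self-contained version of that snippet might look like the sketch below, assuming the Hugging Face `transformers` library and the `deepseek-ai/deepseek-coder-6.7b-base` checkpoint; the exact checkpoint name and prompt used in the README may differ:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Assumed checkpoint name; the README's example may use another size/variant.
checkpoint = "deepseek-ai/deepseek-coder-6.7b-base"

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True
).cuda()

# A base (non-instruct) model simply continues the prompt text.
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# These two lines correspond to the tail shown in the diff hunk above.
outputs = model.generate(**inputs, max_new_tokens=140)
print(tokenizer.decode(outputs[0]))
```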
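The paragraph removed by the second hunk describes repository-level completion: the model is shown `model.py` and `utils.py` alongside an unfinished `main.py`, and completes the **main** function using symbols defined in the other files (such as **IrisClassifier**). A minimal sketch of how such a cross-file prompt could be assembled follows; the `build_repo_prompt` helper, the `#<filename>` comment markers, and the `iris_project` path are illustrative assumptions, not the repository's documented interface:

```python
import os

def build_repo_prompt(repo_dir: str, filenames: list[str]) -> str:
    """Concatenate repository files into one prompt, each preceded by a
    comment naming the file, so the model can resolve cross-file references
    (e.g. IrisClassifier from model.py while completing main.py)."""
    parts = []
    for name in filenames:
        with open(os.path.join(repo_dir, name), encoding="utf-8") as f:
            parts.append(f"#{name}\n{f.read()}")
    return "\n".join(parts)

# main.py goes last so the model's continuation completes its main() function.
prompt = build_repo_prompt("iris_project", ["utils.py", "model.py", "main.py"])
# `prompt` can then be tokenized and passed to model.generate() exactly as in
# the generation sketch above.
```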