Update README.md
README.md CHANGED
@@ -12,6 +12,8 @@ Welcome to our Code Model repository! Our model is specifically fine-tuned for c

### News 🔥🔥🔥

+- [2024/01/09] We released **Code Millenials 3B**, which achieves **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
+- [2024/01/09] We released **Code Millenials 1B**, which achieves **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B**, which achieves **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B**, which achieves **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).

@@ -22,6 +24,10 @@ Welcome to our Code Model repository! Our model is specifically fine-tuned for c
<a><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>

+<p align="center" width="100%">
+<a><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result-3b.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
+</p>
+
For the Millenials models, the eval script in the GitHub repo was used to produce the results above.

Note: The HumanEval scores of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities), etc.
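
Here pass@1 is the fraction of the 164 HumanEval problems for which a single generated completion passes all of the benchmark's unit tests. The reported numbers come from the eval script in this repo; purely as an unofficial sketch of that kind of run, the snippet below generates one greedy completion per HumanEval task with a Hugging Face checkpoint and writes a `samples.jsonl` file for the official harness to score. The model id, dtype, and decoding settings are assumptions, not the repo's actual configuration.

```python
# Unofficial sketch: generate HumanEval completions for scoring with the
# openai/human-eval harness. Model id, dtype, and decoding are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl

MODEL_ID = "budecosystem/code-millenials-34b"  # hypothetical Hugging Face Hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def complete(prompt: str) -> str:
    """Return a single greedy continuation of a HumanEval prompt."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Keep only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

problems = read_problems()  # the 164 HumanEval tasks
samples = [
    {"task_id": task_id, "completion": complete(problems[task_id]["prompt"])}
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)
```

The resulting file can then be scored with the harness CLI, `evaluate_functional_correctness samples.jsonl`; note that the harness deliberately ships with its code-execution call disabled, so follow the human-eval README to enable it before scoring.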