# Model Overview

<div align="center">

<span style="font-family: default; font-size: 1.5em;">DLER-R1-7B</span>

<div>

🚀 The leading efficient reasoning model for cutting-edge research and development 🌟

</div>

</div>



### Description:

DLER-Qwen-R1-7B is an ultra-efficient 7B open-weight reasoning model designed for challenging tasks such as mathematics, programming, and scientific problem-solving. It is trained with the DLER algorithm on the agentica-org/DeepScaleR-Preview-Dataset. Compared to DeepSeek's 7B reasoning model, it achieves substantial efficiency gains, reducing average response length by nearly 80% across diverse mathematical benchmarks while delivering higher accuracy.