Update README.md
README.md CHANGED

@@ -49,7 +49,7 @@ print(response)
## Evaluation Results

Both Kumru-7B and Kumru-2B are evaluated on the Cetvel benchmark.

-<img src="https://cdn-uploads.huggingface.co/production/uploads/6147363543eb04c443cd4e39/
+<img src="https://cdn-uploads.huggingface.co/production/uploads/6147363543eb04c443cd4e39/eu2TuwVpLwRWAh3MjWc1v.png" alt="preview" width="750"/>

We observe that Kumru overall surpasses significantly larger models such as LLaMA-3.3-70B, Gemma-3-27B, Qwen-2-72B, and Aya-32B. It excels at tasks related to the nuances of the Turkish language, such as grammatical error correction and text summarization.