munish0838 committed
Commit
ae09d1c
1 Parent(s): b1cfeaa

Create README.md

---
language:
- it
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
base_model: DeepMount00/Qwen2-1.5B-Ita
---

# QuantFactory/Qwen2-1.5B-Ita-GGUF
This is a quantized version of [DeepMount00/Qwen2-1.5B-Ita](https://huggingface.co/DeepMount00/Qwen2-1.5B-Ita) created using llama.cpp.
12
+
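A GGUF checkpoint like this one can be run directly with llama.cpp. A minimal sketch follows; the quantized filename is an assumption for illustration, so check the repository's Files tab for the actual `.gguf` names:

```shell
# Download one quantized file from the repo
# (the Q4_K_M filename below is illustrative, not confirmed)
huggingface-cli download QuantFactory/Qwen2-1.5B-Ita-GGUF \
  Qwen2-1.5B-Ita.Q4_K_M.gguf --local-dir .

# Run a short Italian prompt with the llama.cpp CLI
./llama-cli -m Qwen2-1.5B-Ita.Q4_K_M.gguf \
  -p "Ciao, come stai?" -n 128
```

Lower-bit quantizations trade some quality for a smaller memory footprint; for a 1.5B model, even modest hardware should handle the mid-range quants comfortably.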

# Model Description
### Qwen2 1.5B: Almost the Same Performance as ITALIA (iGenius) but 6 Times Smaller 🚀

### Model Overview

**Model Name:** Qwen2 1.5B Fine-tuned for Italian Language
**Version:** 1.5b
**Model Type:** Language Model
**Parameter Count:** 1.5 billion
**Language:** Italian
**Comparable Model:** [ITALIA by iGenius](https://huggingface.co/iGeniusAI) (9 billion parameters)

### Model Description

Qwen2 1.5B is a compact language model specifically fine-tuned for the Italian language. Despite its relatively small size of 1.5 billion parameters, Qwen2 1.5B demonstrates strong performance, nearly matching the capabilities of larger models such as the **9 billion parameter ITALIA model by iGenius**. The fine-tuning process focused on optimizing the model for various language tasks in Italian, making it highly efficient and effective for Italian language applications.

### Performance Evaluation

The performance of Qwen2 1.5B was evaluated on several benchmarks and compared against the ITALIA model. The results are as follows:

| Model      | Parameters | Average | MMLU  | ARC   | HELLASWAG |
|------------|------------|---------|-------|-------|-----------|
| ITALIA     | 9B         | 43.5    | 35.22 | 38.49 | 56.79     |
| Qwen2 1.5B | 1.5B       | 43.18   | 49.04 | 33.06 | 47.13     |

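As a quick sanity check, the Average column can be recomputed as the unweighted mean of the three benchmark scores, and the "6 times smaller" claim follows from the parameter counts. Note that for the Qwen2 1.5B row this mean comes out to 43.08, slightly below the 43.18 reported above, so the published average may use a different weighting or rounding:

```python
# Recompute per-model averages from the benchmark table above.
scores = {
    "ITALIA":     [35.22, 38.49, 56.79],  # MMLU, ARC, HELLASWAG
    "Qwen2 1.5B": [49.04, 33.06, 47.13],
}

for model, vals in scores.items():
    avg = round(sum(vals) / len(vals), 2)
    print(f"{model}: {avg}")
# ITALIA: 43.5, Qwen2 1.5B: 43.08 (the table reports 43.18)

# Size ratio behind the "6 times smaller" claim: 9B / 1.5B
print(9 / 1.5)  # 6.0
```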

### Conclusion

Qwen2 1.5B demonstrates that a smaller, more efficient model can achieve performance levels comparable to much larger models. It excels in the MMLU benchmark, showing its strength in multitask language understanding. While it scores slightly lower in the ARC and HELLASWAG benchmarks, its overall performance makes it a viable option for Italian language tasks, offering a balance between efficiency and capability.