DeepMount00 committed
Commit 7a360e1 · 1 Parent(s): a174bbb
Update README.md

README.md CHANGED
@@ -1,116 +1,61 @@
-</style>
-</head>
-<body>
-<div class="container">
-<h1>Alireo-400M Model Card</h1>
-
-<h2>Model Description</h2>
-<p>Alireo-400M is a lightweight yet powerful Italian language model with 400M parameters, designed to provide efficient natural language processing capabilities while maintaining a smaller footprint compared to larger models.</p>
-
-<h2>Key Features</h2>
-<div class="features-list">
-<ul>
-<li><strong>Architecture:</strong> Transformer-based language model</li>
-<li><strong>Parameters:</strong> 400M</li>
-<li><strong>Context Window:</strong> 8K tokens</li>
-<li><strong>Training Data:</strong> Curated Italian text corpus (books, articles, web content)</li>
-<li><strong>Model Size:</strong> ~800MB</li>
-</ul>
-</div>
-
-<h2>Performance</h2>
-<div class="performance">
-<p>Despite its compact size, Alireo-400M demonstrates impressive performance:</p>
-<ul>
-<li>Outperforms Qwen 0.5B across multiple benchmarks</li>
-<li>Maintains high accuracy in Italian language understanding tasks</li>
-<li>Efficient inference speed due to optimized architecture</li>
-</ul>
-</div>
-
-<h2>Limitations</h2>
-<div class="limitations">
-<ul>
-<li>Limited context window compared to larger models</li>
-<li>May struggle with highly specialized technical content</li>
-<li>Performance may vary on dialectal variations</li>
-<li>Not suitable for multilingual tasks</li>
-</ul>
-</div>
-
-<h2>Hardware Requirements</h2>
-<div class="requirements">
-<ul>
-<li><strong>Minimum RAM:</strong> 2GB</li>
-<li><strong>Recommended RAM:</strong> 4GB</li>
-<li><strong>GPU:</strong> Optional, but recommended for faster inference</li>
-<li><strong>Disk Space:</strong> ~1GB (including model and dependencies)</li>
-</ul>
-</div>
-
-<h2>License</h2>
-<p>Apache 2.0</p>
-
-<h2>Citation</h2>
-<div class="citation">@software{alireo2024,
  author = {[Michele Montebovi]},
  title = {Alireo-400M: A Lightweight Italian Language Model},
  year = {2024},
-}
-
-</body>
+# Alireo-400M Model Card
+
+## Model Description
+Alireo-400M is a lightweight yet powerful Italian language model with 400M parameters, designed to provide efficient natural language processing capabilities while maintaining a smaller footprint compared to larger models.
+
+## Key Features
+* **Architecture**: Transformer-based language model
+* **Parameters**: 400M
+* **Context Window**: 8K tokens
+* **Training Data**: Curated Italian text corpus (books, articles, web content)
+* **Model Size**: ~800MB
+
+## Performance
+Despite its compact size, Alireo-400M demonstrates impressive performance:
+
+* **Benchmark Results**: Outperforms Qwen 0.5B across multiple benchmarks
+* **Language Understanding**: Maintains high accuracy in Italian language understanding tasks
+* **Speed**: Efficient inference speed due to optimized architecture
+
+## Limitations
+* Limited context window compared to larger models
+* May struggle with highly specialized technical content
+* Performance may vary on dialectal variations
+* Not suitable for multilingual tasks
+
+## Hardware Requirements
+* **Minimum RAM**: 2GB
+* **Recommended RAM**: 4GB
+* **GPU**: Optional, but recommended for faster inference
+* **Disk Space**: ~1GB (including model and dependencies)
+
+## Usage Example
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Load model and tokenizer
+model = AutoModelForCausalLM.from_pretrained("montebovi/alireo-400m")
+tokenizer = AutoTokenizer.from_pretrained("montebovi/alireo-400m")
+
+# Example text
+text = "L'intelligenza artificiale sta"
+
+# Tokenize and generate
+inputs = tokenizer(text, return_tensors="pt")
+outputs = model.generate(**inputs, max_new_tokens=50)
+result = tokenizer.decode(outputs[0], skip_special_tokens=True)
+print(result)
+```
+
+## License
+Apache 2.0
+
+## Citation
+```bibtex
+@software{alireo2024,
  author = {[Michele Montebovi]},
  title = {Alireo-400M: A Lightweight Italian Language Model},
  year = {2024},
+}
+```
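
The updated card marks the GPU as optional but recommended. As a minimal sketch, not part of the commit, the card's usage example could be adapted to load the model in half precision when a GPU is available; the repo id montebovi/alireo-400m is taken from the card's own example, and the float16 choice is an assumption rather than something the card specifies.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch only: repo id comes from the card's usage example;
# half-precision on GPU is an assumption to fit the 2-4 GB RAM envelope
# described in the hardware requirements.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained("montebovi/alireo-400m")
model = AutoModelForCausalLM.from_pretrained(
    "montebovi/alireo-400m", torch_dtype=dtype
).to(device)

# Same Italian prompt as the card's example, run on the selected device
inputs = tokenizer("L'intelligenza artificiale sta", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```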