Update README.md
README.md CHANGED

@@ -24,7 +24,7 @@ model-index:
 # 🚀 Decode-12B-MoE: High-Performance Mixture of Experts Model

 **Decode-12B-MoE** is a Large Language Model (LLM) utilizing a **Sparse Mixture of Experts (MoE)** architecture with a total of **12.5 billion parameters**. This model is engineered to bridge the gap between massive parameter counts and computational efficiency, activating only a fraction of its weights (~2.5B) during inference.
-
+**Untrained model!**
 ## 📌 Technical Specifications

 | Attribute | Value |
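The README paragraph above states that only ~2.5B of the 12.5B parameters are active during inference. Below is a minimal sketch of how a sparse MoE layer achieves this via top-k routing: a small gating network scores the experts for each token and only the top-k experts actually run. The layer sizes, expert count (8), and `top_k` (2) here are illustrative assumptions, not confirmed hyperparameters of Decode-12B-MoE.

```python
# Minimal sketch of sparse top-k MoE routing (illustrative only).
# num_experts=8 and top_k=2 are assumptions for demonstration; they are
# NOT confirmed hyperparameters of Decode-12B-MoE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        logits = self.router(x)                         # (num_tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out  # only top_k of num_experts experts ran per token
```

With `top_k=2` of 8 experts, only a fraction of the expert parameters execute for any given token (the attention and embedding weights are always active), which is how a 12.5B-parameter model can run with roughly 2.5B active parameters per forward pass.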