Minh2508 committed (verified)
Commit e32e654 · Parent(s): 172db2d

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -24,7 +24,7 @@ model-index:
 # 🚀 Decode-12B-MoE: High-Performance Mixture of Experts Model
 
 **Decode-12B-MoE** is a Large Language Model (LLM) utilizing a **Sparse Mixture of Experts (MoE)** architecture with a total of **12.5 billion parameters**. This model is engineered to bridge the gap between massive parameter counts and computational efficiency, activating only a fraction of its weights (~2.5B) during inference.
-
+** Untrained model! **
 ## 📌 Technical Specifications
 
 | Attribute | Value |
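
The README excerpt above describes sparse activation: only a few experts run per token, which is how roughly 2.5B of the 12.5B parameters are used at inference. Below is a minimal sketch of top-k expert routing in PyTorch to illustrate the idea; the hidden sizes, expert count, and top_k value are illustrative assumptions and do not reflect Decode-12B-MoE's actual configuration or code.

```python
# Minimal sketch of top-k expert routing in a sparse MoE layer.
# Dimensions, expert count, and top_k are illustrative only and are NOT
# taken from Decode-12B-MoE's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=128, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (tokens, d_model)
        gate_logits = self.router(x)
        weights, indices = torch.topk(gate_logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(x)
        # Only the top-k experts selected for each token are evaluated,
        # so most expert parameters stay idle on any given token.
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = indices[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * self.experts[e](x[mask])
        return out


if __name__ == "__main__":
    layer = SparseMoELayer()
    tokens = torch.randn(4, 64)
    print(layer(tokens).shape)  # torch.Size([4, 64])
```

In this sketch each dummy token is routed through only 2 of the 8 experts, which is what keeps per-token compute well below the full parameter count in a sparse MoE.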