Update README.md
README.md
CHANGED
@@ -3,4 +3,7 @@ license: mit
 language:
 - en
 base_model: meta-llama/Meta-Llama-3-8B-Instruct
----
+---
+
+# Llama 3 8b Instruct MOE
+Llama 3 8b Instruct base model converted to MoE style by randomly partitioning the FFN layers of each decoder layer into 8 equally sized experts. Weights are taken directly from the Llama 3 Instruct base model.
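
For reference, a minimal sketch of what "randomly partitioning the FFN layers of each decoder layer into 8 experts" could look like for a single layer. It assumes the standard Llama FFN projection names and shapes (`gate_proj`, `up_proj`, `down_proj`; hidden size 4096, intermediate size 14336). The tensor names and the use of `torch.randperm` are illustrative assumptions, not the actual conversion script used for this model.

```python
# Illustrative sketch (not the original conversion script): split one decoder
# layer's dense FFN into 8 disjoint, equally sized experts by randomly
# partitioning the intermediate dimension.
import torch

hidden_size = 4096          # Llama 3 8B hidden dim
intermediate_size = 14336   # Llama 3 8B FFN dim
num_experts = 8
expert_size = intermediate_size // num_experts  # 1792 intermediate units per expert

# Stand-ins for one layer's dense FFN weights; in practice these would be
# read from the base model's state_dict.
gate_proj = torch.randn(intermediate_size, hidden_size)
up_proj   = torch.randn(intermediate_size, hidden_size)
down_proj = torch.randn(hidden_size, intermediate_size)

# Random partition of the intermediate dimension into 8 disjoint index groups.
perm = torch.randperm(intermediate_size)
groups = perm.view(num_experts, expert_size)

experts = []
for idx in groups:
    experts.append({
        # Row-slice the gate/up projections along the intermediate dimension...
        "gate_proj": gate_proj[idx, :],  # (expert_size, hidden_size)
        "up_proj":   up_proj[idx, :],    # (expert_size, hidden_size)
        # ...and column-slice the down projection correspondingly.
        "down_proj": down_proj[:, idx],  # (hidden_size, expert_size)
    })
```

Because the slices are disjoint and cover the full intermediate dimension, activating all 8 experts of a layer and summing their outputs would reproduce the original dense SwiGLU FFN exactly; how tokens are routed to experts is outside the scope of this sketch.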