Llama 3 Context-Obedient models in a 3x MoE configuration. The model's three experts are split by role: understanding and summarizing the input, following the provided format, and outputting only context-relevant answers.

Trained for 1 epoch on Mermaid so that the model thinks more step by step and understands the context better.
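A minimal sketch of loading the model and asking a context-grounded question with `transformers`. The repo id, system prompt, and chat-template usage below are assumptions rather than this model's published instructions; adjust them to the actual repository name and prompt format.

```python
# Minimal sketch: load the 3x MoE model and ask a question grounded in supplied context.
# The repo id below is a placeholder, not the actual repository name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Llama-3-Context-Obedient-3x-MoE"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Put the reference material in the prompt and ask about it; the model is
# intended to answer only from the supplied context.
context = "The Eiffel Tower was completed in 1889 and is 330 metres tall."
question = "When was the Eiffel Tower completed?"

messages = [
    {"role": "system", "content": "Answer using only the provided context."},
    {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```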