TroyDoesAI
Llama 3 Context-Obedient models in a 3 x MoE configuration. The model's three experts are split by role: understanding and summarizing the input, following the provided format, and outputting only context-relevant answers.
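The expert split above follows the usual mixture-of-experts pattern: a router scores each hidden state against the experts and the expert outputs are mixed by those scores. As a minimal toy sketch (not the released weights; all sizes and parameters here are illustrative assumptions), the forward pass for a 3-expert layer looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8          # toy hidden size (assumed for illustration)
N_EXPERTS = 3  # understanding/summarizing, format-following, context-relevant answering

# Hypothetical random parameters -- stand-ins for trained weights.
router_w = rng.normal(size=(D, N_EXPERTS))
expert_ws = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(h):
    """Route one hidden state through all three experts, mixing by router gates."""
    gates = softmax(h @ router_w)                   # shape (N_EXPERTS,), sums to 1
    outputs = np.stack([h @ w for w in expert_ws])  # shape (N_EXPERTS, D)
    return gates @ outputs, gates                   # gate-weighted mixture

h = rng.normal(size=D)
y, gates = moe_forward(h)
print(y.shape)          # (8,)
```

In a sparse variant the router would keep only the top-k gates and zero the rest; the dense mixture above is just the simplest form of the routing idea.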