Contextual-Obedient-MoE-3x8B-Llama3-RAG / special_tokens_map.json
Llama 3 Context-Obedient models in a 3x MoE configuration. The model's three experts are split by role: understanding and summarizing the input, following the provided format, and outputting only context-relevant answers.
{
  "bos_token": {
    "content": "<|begin_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|begin_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
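
For reference, a minimal sketch of how these values are picked up at load time, assuming the repo id matches the page title above (not confirmed here): the transformers AutoTokenizer reads special_tokens_map.json alongside the tokenizer files, so bos/eos/pad resolve to the entries defined in this file.

```python
from transformers import AutoTokenizer

# Assumed repo id, taken from the page title above.
repo_id = "TroyDoesAI/Contextual-Obedient-MoE-3x8B-Llama3-RAG"

# special_tokens_map.json is read automatically with the tokenizer,
# so no extra configuration is needed for the special tokens.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

print(tokenizer.bos_token)  # <|begin_of_text|>
print(tokenizer.eos_token)  # <|im_end|>
print(tokenizer.pad_token)  # <|begin_of_text|> (shared with bos)
```

Note the ChatML-style `<|im_end|>` eos paired with Llama 3's `<|begin_of_text|>` bos; the pad token reuses the bos token, a common choice when the base tokenizer ships without a dedicated pad token.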