
Model card for JOSIExMistral-7B-Instruct-v0.2

This is my token-customized mistralai/Mistral-7B-Instruct-v0.2 model.

Original Model

This model is based on mistralai/Mistral-7B-Instruct-v0.2 with added custom special tokens. It will most likely become my next model, trained on my own dataset.

--> GGUF Quants <--


Newly Added Special Tokens

```
'<|functions|>',
'<|system|>',
'<|gökdeniz|>',
'<|user|>',
'<|josie|>',
'<|assistant|>',
'<|function_call|>',
'<|function_response|>',
'<|image|>',
'<|long_term_memory|>',
'<|short_term_memory|>',
'<|home_state|>',
'<|current_states|>',
'<|context|>',
'<|im_start|>',
'<|im_end|>'
```
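Adding these tokens to the tokenizer means the embedding matrix and the lm_head have to grow so that each new id gets a row. A minimal sketch of how this can be done with the standard transformers API is shown below; the exact procedure used for this model may differ, and the variable names are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# The custom special tokens listed above
new_special_tokens = [
    "<|functions|>", "<|system|>", "<|gökdeniz|>", "<|user|>", "<|josie|>",
    "<|assistant|>", "<|function_call|>", "<|function_response|>", "<|image|>",
    "<|long_term_memory|>", "<|short_term_memory|>", "<|home_state|>",
    "<|current_states|>", "<|context|>", "<|im_start|>", "<|im_end|>",
]

# Register them as special tokens so the tokenizer never splits them
tokenizer.add_special_tokens({"additional_special_tokens": new_special_tokens})

# Resize the input embeddings and the lm_head to match the enlarged vocabulary
model.resize_token_embeddings(len(tokenizer))
```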

New BOS and EOS Tokens

```
BOS = '<|startoftext|>'
EOS = '<|endoftext|>'
```
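Swapping the BOS and EOS markers can be handled the same way. The sketch below assumes the tokenizer and model objects from the previous snippet and keeps the model and generation configs in sync with the tokenizer; again, this is an illustration rather than the exact script used.

```python
# Replace the default Mistral markers (<s>, </s>) with the new ones
tokenizer.add_special_tokens({
    "bos_token": "<|startoftext|>",
    "eos_token": "<|endoftext|>",
})
model.resize_token_embeddings(len(tokenizer))

# Keep the model and generation configs consistent with the tokenizer
model.config.bos_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.generation_config.bos_token_id = tokenizer.bos_token_id
model.generation_config.eos_token_id = tokenizer.eos_token_id
```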

Model Architecture:

```
MistralForCausalLM(
  (model): MistralModel(
    (embed_tokens): Embedding(32018, 4096)
    (layers): ModuleList(
      (0-31): 32 x MistralDecoderLayer(
        (self_attn): MistralSdpaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=1024, bias=False)
          (v_proj): Linear(in_features=4096, out_features=1024, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): MistralRotaryEmbedding()
        )
        (mlp): MistralMLP(
          (gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
          (up_proj): Linear(in_features=4096, out_features=14336, bias=False)
          (down_proj): Linear(in_features=14336, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): MistralRMSNorm()
        (post_attention_layernorm): MistralRMSNorm()
      )
    )
    (norm): MistralRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32018, bias=False)
)
```
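The embedding and lm_head size of 32018 is the base Mistral vocabulary of 32,000 plus the 18 tokens added above (16 custom special tokens plus the new BOS and EOS). The printout can be reproduced by loading the model and printing it; the repo id below is taken from this card's name and may need to be replaced with the full Hub path.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "JOSIExMistral-7B-Instruct-v0.2",  # replace with the full Hub repo id
    torch_dtype="auto",
)
print(model)                                        # module tree shown above
print(model.get_input_embeddings().num_embeddings)  # 32018
```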
