Tags: Text Generation · Transformers · PyTorch · English · mistral · conversational · Inference Endpoints · text-generation-inference
teknium committed · Commit 011b872 · 1 parent: e5d36fb

Fix EOS Token

The model has its own setting for the EOS token, which is not equivalent to the tokenizer's setting. This fixed the issue for me.

Files changed (1): config.json (+1 −1)
config.json CHANGED

@@ -4,7 +4,7 @@
     "MistralForCausalLM"
   ],
   "bos_token_id": 1,
-  "eos_token_id": 2,
+  "eos_token_id": 32000,
   "hidden_act": "silu",
   "hidden_size": 4096,
   "initializer_range": 0.02,
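The one-line change above can be illustrated with a minimal sketch. The function and dictionaries below are illustrative, not part of the repository: they model how generation code compares the `eos_token_id` in `config.json` against the token id the tokenizer actually emits as end-of-sequence (32000 in this commit).

```python
def eos_ids_match(config: dict, tokenizer_eos_id: int) -> bool:
    """Return True when the model config's EOS id agrees with the tokenizer's.

    If they disagree, generation code watching for the config's id will
    never see the stop token the tokenizer produces, so output runs on.
    """
    return config.get("eos_token_id") == tokenizer_eos_id


# Before the fix: config.json declared EOS id 2, while the tokenizer's
# actual end-of-sequence token has id 32000.
old_config = {"bos_token_id": 1, "eos_token_id": 2}
new_config = {"bos_token_id": 1, "eos_token_id": 32000}

print(eos_ids_match(old_config, 32000))  # False — mismatch, generation won't stop
print(eos_ids_match(new_config, 32000))  # True — patched config agrees
```

In practice the same check can be done by loading the repository with `transformers` and comparing `model.config.eos_token_id` to `tokenizer.eos_token_id` before generating.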