jphme committed
Commit 966d6a5
Parent: 1feda3d

fix vocab size


```python
from transformers import AutoTokenizer
testtokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-mistral-hessianai-7b-chat")
len(testtokenizer)
# 32002
```

The mismatch between the declared `vocab_size` (32128) and the actual tokenizer size (32002) leads to errors in downstream tooling, e.g. in vLLM:
`TypeError: argument 'tokens': 'NoneType' object cannot be converted to 'PyString'`
(see [here](https://github.com/vllm-project/vllm/issues/516#issuecomment-1657507293))
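For reference, a minimal sketch (using the same model ID as above) of the consistency check that this commit is meant to satisfy, comparing the declared `vocab_size` against the tokenizer:

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "LeoLM/leo-mistral-hessianai-7b-chat"

# Vocabulary size declared in config.json vs. the number of tokens the tokenizer actually defines.
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.vocab_size, len(tokenizer))
# Before this commit: 32128 vs. 32002 (mismatch); with the fix both report 32002.
assert config.vocab_size == len(tokenizer), "config.json vocab_size does not match tokenizer size"
```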

Files changed (1)
  1. config.json +1 -1
config.json CHANGED
```diff
@@ -21,5 +21,5 @@
   "torch_dtype": "float16",
   "transformers_version": "4.34.0",
   "use_cache": true,
-  "vocab_size": 32128
+  "vocab_size": 32002
 }
```