special_tokens_map.json is customized or outdated.

#5
by GroundSpyder - opened

I loaded the model into oobabooga just as shown in this video: https://www.youtube.com/watch?v=k-LUHw4Hb_w&t=145s

But when I load the model I get the following:
2023-07-28 08:06:59 INFO:Loading georgesung_llama2_7b_chat_uncensored...
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 3/3 [00:12<00:00, 4.14s/it]
2023-07-28 08:07:12 WARNING:models\georgesung_llama2_7b_chat_uncensored\special_tokens_map.json is different from the original LlamaTokenizer file. It is either customized or outdated.
2023-07-28 08:07:12 INFO:Loaded the model in 12.68 seconds.

And when I try to generate any kind of response, I get the following error and no output:
Traceback (most recent call last):
File "C:\oobabooga_windows\text-generation-webui\modules\callbacks.py", line 55, in gentask
ret = self.mfunc(callback=_callback, *args, **self.kwargs)
File "C:\oobabooga_windows\text-generation-webui\modules\text_generation.py", line 293, in generate_with_callback
shared.model.generate(**kwargs)
File "C:\oobabooga_windows\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\oobabooga_windows\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 1335, in generate
and torch.sum(inputs_tensor[:, -1] == generation_config.pad_token_id) > 0
IndexError: index -1 is out of bounds for dimension 1 with size 0
Output generated in 0.25 seconds (0.00 tokens/s, 0 tokens, context 0, seed 1811491874)
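For context on what the traceback means: the failing line in `transformers` inspects the last token of the input tensor (`inputs_tensor[:, -1]`), and the "context 0" in the output line shows the prompt tokenized to zero tokens, so the sequence dimension has size 0 and index -1 is out of bounds. A minimal sketch of the same failure, using a plain Python list to stand in for the token sequence (the helper name is made up for illustration):

```python
def last_token(token_ids):
    """Mimic inputs_tensor[:, -1] for a single token sequence."""
    return token_ids[-1]  # raises IndexError when the sequence is empty

# A normal prompt has at least one token, so this works:
print(last_token([1, 29871, 3532]))

# An empty prompt ("context 0") reproduces the same class of error:
try:
    last_token([])
except IndexError as exc:
    print("IndexError:", exc)
```

In other words, the model received an empty prompt, which usually points at a prompt-template or tokenizer mismatch rather than the model weights themselves.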

Any idea what I have done wrong?
