KeyError when running apply_chat_template

#22
by FearandDreams - opened

I'm using transformers == 4.40.0.
The exact error is KeyError: '[gMASK]'; full traceback:
Traceback (most recent call last):
  File "/data/lizhe/VLMToolMisuse/train_adversarial_image.py", line 315, in <module>
    attack(
  File "/data/lizhe/VLMToolMisuse/train_adversarial_image.py", line 208, in attack
    normal_answer = model.generate_during_train(prompt, image_tensor, max_gen_len=128)
  File "/data/lizhe/VLMToolMisuse/glmModel.py", line 122, in generate_during_train
    response, history = self.base_model.chat_4v_9b(
  File "/data/xiaohan/miniconda3/envs/tool_misuse/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/modeling_chatglm.py", line 1109, in chat_4v_9b
    inputs = tokenizer.apply_chat_template(history, add_generation_prompt=True, tokenize=True,
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/tokenization_chatglm.py", line 236, in apply_chat_template
    result = handle_single_conversation(conversation)
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/tokenization_chatglm.py", line 172, in handle_single_conversation
    input_ids = self.get_prefix_tokens() if add_special_tokens else []
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/tokenization_chatglm.py", line 134, in get_prefix_tokens
    prefix_tokens = [self.convert_tokens_to_ids("[gMASK]"), self.convert_tokens_to_ids("<sop>")]
  File "/data/xiaohan/miniconda3/envs/tool_misuse/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 645, in convert_tokens_to_ids
    return self._convert_token_to_id_with_added_voc(tokens)
  File "/data/xiaohan/miniconda3/envs/tool_misuse/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 658, in _convert_token_to_id_with_added_voc
    return self._convert_token_to_id(token)
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/tokenization_chatglm.py", line 99, in _convert_token_to_id
    return self.mergeable_ranks[token]
KeyError: '[gMASK]'
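
For context, the lookup order that produces this KeyError: in transformers' slow tokenizers, convert_tokens_to_ids first checks the added-token vocabulary (populated from tokenizer_config.json at load time) and only then falls back to the model-specific _convert_token_to_id, which for ChatGLM4Tokenizer is a plain dict access on the tiktoken BPE ranks. A simplified sketch of that order (not the exact transformers code):

def convert_token_to_id(tokenizer, token):
    # Special/added tokens such as "[gMASK]" are registered from
    # tokenizer_config.json when the tokenizer is loaded properly.
    if token in tokenizer.added_tokens_encoder:
        return tokenizer.added_tokens_encoder[token]
    # Fallback: raw BPE merge ranks. "[gMASK]" is not a BPE merge, so if
    # the added-token table was never populated this raises the KeyError
    # seen in the traceback above.
    return tokenizer.mergeable_ranks[token]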

Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University org

Try upgrading transformers to 4.40.
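
For reference, you can confirm the installed version before retrying:

import transformers
print(transformers.__version__)  # should print 4.40.0 or later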

@zRzRzRzRzRzRzR Hi, I'm already on 4.40. The exact error is as follows:
transformers.__version__: 4.40.0
0%|          | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/data/lizhe/VLMToolMisuse/train_adversarial_image.py", line 315, in <module>
    attack(
  File "/data/lizhe/VLMToolMisuse/train_adversarial_image.py", line 208, in attack
    normal_answer = model.generate_during_train(prompt, image_tensor, max_gen_len=128)
  File "/data/lizhe/VLMToolMisuse/glmModel.py", line 122, in generate_during_train
    response, history = self.base_model.chat_4v_9b(
  File "/data/xiaohan/miniconda3/envs/tool_misuse/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/modeling_chatglm.py", line 1109, in chat_4v_9b
    inputs = tokenizer.apply_chat_template(history, add_generation_prompt=True, tokenize=True,
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/tokenization_chatglm.py", line 236, in apply_chat_template
    result = handle_single_conversation(conversation)
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/tokenization_chatglm.py", line 211, in handle_single_conversation
    message_prefix = self.convert_tokens_to_ids(
  File "/data/xiaohan/miniconda3/envs/tool_misuse/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 649, in convert_tokens_to_ids
    ids.append(self._convert_token_to_id_with_added_voc(token))
  File "/data/xiaohan/miniconda3/envs/tool_misuse/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 658, in _convert_token_to_id_with_added_voc
    return self._convert_token_to_id(token)
  File "/data/lizhe/VLMToolMisuse/glm_4v_9b/tokenization_chatglm.py", line 99, in _convert_token_to_id
    return self.mergeable_ranks[token]
KeyError: '<|begin_of_image|>'
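
Note that both tracebacks are the same failure with different tokens: a special token falls through an unpopulated added-token table into mergeable_ranks. A quick sanity check on a loaded tokenizer (the first and third tokens come straight from the tracebacks; <sop> is the other prefix token from get_prefix_tokens, and any further names would need verifying against tokenizer_config.json):

for tok in ["[gMASK]", "<sop>", "<|begin_of_image|>"]:
    # On a correctly loaded tokenizer each call returns a valid id
    # instead of raising KeyError.
    print(tok, tokenizer.convert_tokens_to_ids(tok))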

Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University org

Show me your inference code.

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_PATH,
    trust_remote_code=True,
    encode_special_tokens=True
)
This is how I load it, and it works fine.
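
For completeness, a minimal usage sketch with a tokenizer loaded this way; the message content is a placeholder, and return_tensors/return_dict are assumptions beyond what is visible in the truncated apply_chat_template call in the traceback:

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "hello"}],  # placeholder conversation
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
)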

@zRzRzRzRzRzRzR Hi, I need to modify some of the logic inside apply_chat_template, so I didn't use AutoTokenizer. Roughly, this is how I use it:
from glm_4v_9b.tokenization_chatglm import *

tokenizer = ChatGLM4Tokenizer("glm_4v_9b/tokenizer.model")
tokenizer.model was taken from the Files and versions tab here.
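
This is likely the root cause: constructing ChatGLM4Tokenizer directly from the bare tokenizer.model file skips tokenizer_config.json, whose added_tokens_decoder is what registers [gMASK], <|begin_of_image|>, and the other special tokens; without it, convert_tokens_to_ids falls through to mergeable_ranks and raises exactly these KeyErrors. A sketch of one way to keep a customized apply_chat_template while still loading through from_pretrained (the subclass name is hypothetical; the override body is a placeholder):

from glm_4v_9b.tokenization_chatglm import ChatGLM4Tokenizer

class PatchedChatGLM4Tokenizer(ChatGLM4Tokenizer):
    def apply_chat_template(self, conversation, **kwargs):
        # ...your modifications to the template logic go here...
        return super().apply_chat_template(conversation, **kwargs)

# Loading from the model directory (not the bare tokenizer.model file)
# applies added_tokens_decoder from tokenizer_config.json, so the special
# tokens resolve correctly.
tokenizer = PatchedChatGLM4Tokenizer.from_pretrained(
    "glm_4v_9b", encode_special_tokens=True
)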
