Error when loading the model for inference

#1 by mindywu

Hi,
When I load this model to run inference, I get an error.
How can I resolve this issue? Thank you.

code:

from transformers import DonutProcessor, VisionEncoderDecoderModel
processor = DonutProcessor.from_pretrained("Cdywalst/donut_multitask-shivi-recognition") 
model = VisionEncoderDecoderModel.from_pretrained("Cdywalst/donut_multitask-shivi-recognition")

error:
RuntimeError: Error(s) in loading state_dict for VisionEncoderDecoderModel:
size mismatch for decoder.model.decoder.embed_tokens.weight: copying a param with shape torch.Size([57536, 1024]) from checkpoint, the shape in current model is torch.Size([57525, 1024]).
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.
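For reference, a minimal sketch of the workaround the error message points to, assuming the mismatch is a vocabulary-size difference between the checkpoint's decoder embedding (57536 rows) and the default model config (57525 rows):

from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Cdywalst/donut_multitask-shivi-recognition")

# Skip the mismatched tensor as the error suggests; note that the skipped
# embedding matrix is then randomly re-initialized instead of loaded.
model = VisionEncoderDecoderModel.from_pretrained(
    "Cdywalst/donut_multitask-shivi-recognition",
    ignore_mismatched_sizes=True,
)

# Resize the decoder embeddings so the model and the processor's tokenizer
# agree on vocabulary size.
model.decoder.resize_token_embeddings(len(processor.tokenizer))

This only makes the model loadable; if the checkpoint was trained with the larger vocabulary, the cleaner fix would be for the repo's config to declare that vocabulary size so the trained embedding weights load as-is.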
