Is it possible to use a Whisper PyTorch model with a different dtype than the entry in config.json?

#28
by ait-paca - opened

Hi,
Thanks for sharing this model and related work.

I downloaded the Whisper-Medium model with the HF Hub snapshot download, ignoring the patterns for msgpack, h5, and safetensors. As a result, pytorch_model.bin is the only model file downloaded, along with the rest of the repo files.

In config.json, torch_dtype is given as float32. If I use the default settings for pipeline or AutoConfig/AutoModel, it works fine. However, if I try using torch_dtype = torch.float16, I get an error about tensors of different sizes.

Is it even possible to use (any) Whisper PyTorch model with a different dtype than the entry in config.json, or am I making a basic mistake? If a different dtype is possible, what adaptations do I need to make?
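For what it's worth, a minimal sketch of the general idea in plain PyTorch: the dtype recorded in config.json only describes how the weights were saved, and a loaded float32 model can be cast to half precision afterwards. (With transformers, the usual route is to pass `torch_dtype=torch.float16` to `from_pretrained`, e.g. via `WhisperForConditionalGeneration`; the snippet below uses a tiny stand-in module instead of the actual checkpoint.)

```python
import torch
import torch.nn as nn

# Stand-in for a float32 checkpoint: a tiny module whose parameters
# default to float32, just like weights loaded from pytorch_model.bin.
model = nn.Linear(4, 2)
assert next(model.parameters()).dtype == torch.float32

# Cast all parameters and buffers to float16 after loading.
model = model.half()
print(next(model.parameters()).dtype)  # torch.float16

# Note: any input tensors must then be cast to the same dtype,
# otherwise forward passes fail with a dtype mismatch.
```

Note that a size-mismatch error (rather than a dtype mismatch) usually points at something else, such as loading a checkpoint into a model built from a different config, so it may be worth double-checking which config.json is being picked up.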
