Details about the optimizations used

#1 by BrunoHays - opened

Hello Xenova,

I stumbled upon this repo while trying to optimize Whisper inference with Optimum.
I compared the inference speed and memory footprint of your ONNX models with mine, and noticed that yours were faster and used less memory.
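For context, this is roughly the kind of Optimum setup I'm comparing against — a minimal sketch, where the model id and the graph-optimization level are my own choices, not necessarily what this repo uses:

```python
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq, ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

# Export Whisper to ONNX (separate encoder/decoder graphs) via Optimum.
# "openai/whisper-small" is just an example checkpoint.
model = ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-small", export=True)

# Apply ONNX Runtime graph optimizations (operator fusion, constant folding, ...).
# Level 2 is an assumption on my part.
optimizer = ORTOptimizer.from_pretrained(model)
optimization_config = OptimizationConfig(optimization_level=2)
optimizer.optimize(
    save_dir="whisper-small-onnx-optimized",
    optimization_config=optimization_config,
)
```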

Can you share any insights, or the optimization config that you used?
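In case quantization is part of the memory savings, this is the dynamic int8 path in Optimum I've been experimenting with — again just a sketch, quantizing each exported graph separately; the AVX512-VNNI target and the file names are my assumptions:

```python
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Dynamic (weights-only) int8 quantization; no calibration data needed.
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

# Whisper exports as separate encoder/decoder ONNX files, so quantize each one.
for onnx_file in ["encoder_model.onnx", "decoder_model.onnx"]:
    quantizer = ORTQuantizer.from_pretrained(
        "whisper-small-onnx-optimized", file_name=onnx_file
    )
    quantizer.quantize(
        save_dir="whisper-small-onnx-quantized",
        quantization_config=qconfig,
    )
```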

Cheers
