Inconsistency between the results from the inference API and locally running model

#21
by Liii2101 - opened

Hi, thanks for the amazing work. I'm trying to run inference locally, but the results are different from those of the online inference API. Do you know why? Is it because of the preprocessing?
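One common cause of this kind of mismatch is a difference in preprocessing or numeric precision (e.g. fp16 on the hosted API vs. fp32 locally), which produces small score drift rather than outright wrong predictions. As a minimal sketch (the helper name and tolerance are my own, not from this repo), you can sanity-check whether your local scores and the API scores agree within a tolerance, assuming NumPy is available:

```python
import numpy as np

def outputs_match(local_scores, api_scores, atol=1e-3):
    """Return True if the two score vectors agree within atol.

    Small numeric drift (fp16 vs fp32, different hardware) is normal;
    large gaps or flipped labels usually point to a preprocessing
    mismatch (tokenizer settings, resizing, normalization, etc.).
    """
    local = np.asarray(local_scores, dtype=np.float64)
    api = np.asarray(api_scores, dtype=np.float64)
    return local.shape == api.shape and np.allclose(local, api, atol=atol)

# Minor numeric drift: still a match.
print(outputs_match([0.91, 0.09], [0.9103, 0.0897]))  # True
# Large gap: preprocessing likely differs between the two runs.
print(outputs_match([0.91, 0.09], [0.12, 0.88]))  # False
```

If the scores disagree by a lot, it is worth diffing the exact inputs (token IDs or preprocessed tensors) fed to the model in both settings rather than only the final outputs.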

Hello, I've encountered the same issue. Have you managed to resolve it?
