Different inference results from local transformer vs inference API

#10
by logdeb - opened

I am getting slightly different probability values when I compare inference results from the local transformers pipeline and the Inference API on the same sentence, and I am wondering why. It only happens for some sentences.
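One plausible source of small discrepancies like this is numeric precision: the hosted API may run the model in a different dtype (e.g. float16) than the local default (float32), and the softmax over the logits then differs in the last few decimal places. This is a speculative illustration, not confirmed behavior of the API; the logit values below are made up:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for a 3-label classifier.
logits32 = np.array([2.1234567, -0.9876543, 0.3456789], dtype=np.float32)
logits16 = logits32.astype(np.float16)  # simulate a half-precision backend

p32 = softmax(logits32.astype(np.float64))
p16 = softmax(logits16.astype(np.float64))

print(p32)
print(p16)
print(np.abs(p32 - p16).max())  # small but nonzero difference
```

Both outputs are valid probability distributions; they just disagree in the low decimal places, which would look exactly like the screenshot's mismatch.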

[Screenshot: Screen Shot 2023-02-13 at 7.31.59 PM.png]

Moreover, the local pipeline seems to return only the highest-probability label, whereas the API returns a score for every label. Sometimes a score from the API is greater than 1 (I have seen 9), and I am wondering why that happens and whether the model is still functioning properly.
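On the first point: the local text-classification `pipeline` returns only the top label by default; passing `top_k=None` (or `return_all_scores=True` on older transformers versions) makes it return a score for every label, matching the API's output shape. On the second point: a score of 9 cannot be a probability, since softmax outputs always lie in (0, 1) and sum to 1, so a value like that looks like a raw logit that was never passed through softmax. This is my reading, not confirmed by the maintainers; a quick sanity check with made-up logits:

```python
import numpy as np

def softmax(logits):
    # Standard stable softmax: shift by the max, exponentiate, normalize.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# A raw logit can easily exceed 1 (here, 9.0), but its softmax cannot.
logits = np.array([9.0, 1.5, -2.0])
probs = softmax(logits)

print(probs)             # every value is strictly between 0 and 1
print(probs.sum())       # probabilities sum to 1
print(logits.max() > 1)  # True: raw logits are unbounded
```

So if the API ever reports a score of 9, that output almost certainly skipped the softmax step rather than the model being broken.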

Cheers!

Same issue here. I stopped using this model because of the inconsistencies and the lack of an explanation.
