Model output is non-deterministic when used via HuggingFace for reranking.

#4
by rkarampatsis - opened

I tried loading this model from HuggingFace with the transformers library and used it for reranking.
However, every time I rerun my code and the model is reloaded, the scores are vastly different from before and highly inconsistent.
I have assumed that the second of the two outputs corresponds to the relevant class.
I put the model in eval mode and set seeds just in case, but the issue persists.

Here is a short code snippet to reproduce the issue:
https://gist.github.com/mpatsisDeus/85c8c3a7c9dc2ba12020d7dbffef7c91
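For reference, the setup described above looks roughly like the following minimal sketch. The model id here is a stand-in public cross-encoder, not necessarily the checkpoint from this thread; with `eval()` and `torch.no_grad()`, repeated forward passes on the same input should produce identical scores.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in reranker checkpoint; substitute the model from this thread.
model_id = "cross-encoder/ms-marco-TinyBERT-L-2-v2"

torch.manual_seed(0)  # guard against any randomly initialized weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()  # disable dropout so repeated passes agree

query = "what is a panda?"
passage = "The giant panda is a bear species endemic to China."
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)

with torch.no_grad():
    # For a two-label head, the second logit is assumed to be the
    # "relevant" class; this stand-in checkpoint exposes one score.
    score_a = model(**inputs).logits[0, -1].item()
    score_b = model(**inputs).logits[0, -1].item()

print(round(score_a, 6))
print(round(score_b, 6))
```

If two passes within the same process already disagree, dropout is still active somewhere; if they agree but scores change across process restarts, the discrepancy is introduced at load time.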
