Deployment on Inference Endpoints (Dedicated)

#17
by khalil235 - opened

Hi there,
I am trying to deploy this model on a Dedicated Inference Endpoint provided by Hugging Face. After deploying, the endpoint returns scores from 0 upward, but using the model with transformers also gives negative values. Is there any way to deploy this on a Dedicated Inference Endpoint without changing how the score is computed?

Beijing Academy of Artificial Intelligence org

Hi @khalil235 , the inference endpoint applies a sigmoid function to normalize the score. Since the sigmoid is monotonically increasing, it does not change the ranking of the scores (a larger raw score maps to a larger normalized score), so there is no need to remove it.
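As a quick sanity check, here is a minimal sketch showing that the sigmoid squashes raw reranker logits into (0, 1) while preserving their order (the sample `raw_scores` values below are made up for illustration):

```python
import math

def sigmoid(x: float) -> float:
    """Map a raw reranker logit to the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))

# Raw reranker scores can be negative or positive (illustrative values).
raw_scores = [-3.2, -0.5, 1.7, 4.1]
normalized = [sigmoid(s) for s in raw_scores]

# Sigmoid is strictly increasing, so the ranking is unchanged:
# the list stays sorted after normalization.
assert normalized == sorted(normalized)
```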

My project has some logic that depends on the negative scores, and I want to keep it as it is. Is there any way I can remove this sigmoid function so it does not modify the score range I was getting with the FlagEmbedding package?

Beijing Academy of Artificial Intelligence org

You can use an inverse function to get the original score: https://stackoverflow.com/questions/10097891/inverse-logistic-sigmoid-function
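A minimal sketch of that inverse (the logit function), which recovers the raw score from the sigmoid-normalized one:

```python
import math

def inverse_sigmoid(p: float) -> float:
    """Recover the raw score x from p = sigmoid(x) = 1 / (1 + exp(-x))."""
    return math.log(p / (1.0 - p))

# Round trip: applying the inverse to a normalized score returns
# the original raw score (up to floating-point error).
raw = -2.3
normalized = 1.0 / (1.0 + math.exp(-raw))
recovered = inverse_sigmoid(normalized)
assert abs(recovered - raw) < 1e-9
```

Note that this only works for scores strictly inside (0, 1); values of exactly 0 or 1 (possible after floating-point rounding) would need clamping first.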

Beijing Academy of Artificial Intelligence org

Hi @khalil235 , we found that the implementation of the inference endpoint provided by Hugging Face is not correct for reranker models, so we recommend not using it.
You can use it with : https://github.com/huggingface/text-embeddings-inference

Hi @khalil235 ,

You can create a custom handler to implement custom functionality in an endpoint.
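A rough sketch of what such a `handler.py` could look like, following the Inference Endpoints custom-handler convention (a class named `EndpointHandler` with a `__call__` method). The real model loading via FlagEmbedding is shown only in comments; the word-overlap scorer below is a stand-in stub so the sketch stays self-contained, not the actual reranker:

```python
# handler.py — hypothetical custom handler returning raw (unnormalized) scores.
from typing import Any, Dict, List


class EndpointHandler:
    def __init__(self, path: str = ""):
        # In a real deployment, load the reranker here, e.g.:
        #   from FlagEmbedding import FlagReranker
        #   self.reranker = FlagReranker(path)
        # A stub keeps this sketch runnable without model weights.
        self.reranker = None

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, float]]:
        # Expected payload shape (assumption): {"inputs": [(query, passage), ...]}
        pairs = data.get("inputs", [])
        # Real code would be: scores = self.reranker.compute_score(pairs)
        # Stub scorer: word overlap minus one, so negative values are possible,
        # mimicking raw logits.
        scores = [
            float(len(set(q.split()) & set(p.split()))) - 1.0
            for q, p in pairs
        ]
        # Return the raw scores directly — no sigmoid is applied.
        return [{"score": s} for s in scores]
```

Placing a file like this (plus a `requirements.txt`) in the model repository lets the endpoint run your scoring logic instead of the default pipeline.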
