Location of text_zero_shot_classification code

by breakend - opened

Hi, sorry to bother you! I am trying to reproduce some results to match the text_zero_shot_classification pipeline. I was wondering if you could point me to the location of the code for how that evaluation is done exactly? I'd like to try running the exact method locally, but can't seem to find the location. Thanks!

Evaluation on the Hub org

Hey @breakend , unfortunately that part of the codebase is closed-source since it runs on AutoTrain. However, we might be able to share the evaluation logic itself - gently pinging @mathemakitten for this :)

Hi @breakend, apologies for the late reply! For each piece of text, we concatenate each candidate completion with the text and run inference on every concatenation. From the logits of each forward pass we sum the log-probabilities of the tokens, and we normalize for length by dividing by the number of tokens. We then compare against the true label to see whether the model assigned a higher score to the correct ending than to the others, and use this for our accuracy calculation. Let me know if I can clarify any of this further!
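The scoring described above can be sketched in plain Python. This is only an illustration, not the actual AutoTrain code: in practice the per-position logits come from a model's forward pass over each text+completion concatenation, while here they are supplied directly over a toy vocabulary, and all function names are made up for the example.

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax over one position's vocabulary logits.
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def sequence_logprob(per_position_logits, token_ids):
    # Sum the log-probabilities the model assigned to the observed tokens,
    # one row of vocabulary logits per token position.
    return sum(
        log_softmax(logits)[tok]
        for logits, tok in zip(per_position_logits, token_ids)
    )

def predict_completion(logits_per_completion, token_ids_per_completion):
    # Length-normalized score for each completion: summed log-prob divided
    # by the number of tokens. The highest-scoring completion is predicted.
    scores = [
        sequence_logprob(logits, toks) / len(toks)
        for logits, toks in zip(logits_per_completion, token_ids_per_completion)
    ]
    return max(range(len(scores)), key=lambda i: scores[i])

# Toy example with a 3-token vocabulary: completion 0 is one token that the
# "model" strongly prefers; completion 1 is two tokens with uniform logits.
logits_a = [[2.0, 0.0, 0.0]]
logits_b = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
predicted = predict_completion([logits_a, logits_b], [[0], [1, 1]])
```

Accuracy would then just be the fraction of examples where `predicted` matches the index of the true completion.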
