Trying to understand probability threshold for determining accuracy of model

#3
by garrettbaber - opened

Hello, thank you for a great model! I'm new to NLP, so forgive me if this is an obvious question: what probability threshold did you use to decide that an emotion label was correctly identified? Was it 0.5? I'm asking because I've read that it can vary, and I didn't see it listed in your model card. If it matters, I'm specifically interested in the accuracy of just the "fear" label, which I'm using in my research. On my small dataset of reported dreams and dream emotions, the model with a 0.5 probability cut-off had a true positive rate of 57%, a false positive rate of 19%, a true negative rate of 77%, and a false negative rate of 43%.
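For reference, this is roughly how I computed those rates: binarize the model's per-label probability at the cut-off and count the four outcomes against my annotations. A minimal sketch (the function name and example numbers are mine, not from the model card, and it assumes one probability per document for a single label such as "fear"):

```python
import numpy as np

def rates_at_threshold(probs, labels, threshold=0.5):
    """Binarize per-label probabilities at `threshold` and return
    TPR, FPR, TNR, FNR for a single binary label."""
    preds = (np.asarray(probs, dtype=float) >= threshold).astype(int)
    labels = np.asarray(labels, dtype=int)

    tp = int(np.sum((preds == 1) & (labels == 1)))  # predicted & true
    fp = int(np.sum((preds == 1) & (labels == 0)))  # predicted, not true
    tn = int(np.sum((preds == 0) & (labels == 0)))  # not predicted, not true
    fn = int(np.sum((preds == 0) & (labels == 1)))  # missed

    return {
        "TPR": tp / (tp + fn) if (tp + fn) else 0.0,  # sensitivity / recall
        "FPR": fp / (fp + tn) if (fp + tn) else 0.0,
        "TNR": tn / (fp + tn) if (fp + tn) else 0.0,  # specificity
        "FNR": fn / (tp + fn) if (tp + fn) else 0.0,
    }

# Toy example: two positives, two negatives, threshold 0.5
rates = rates_at_threshold([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0])
```

Happy to share more details about my evaluation setup if that helps.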
