How to Evaluate Model Performance with "Unlabeled" Test Data in "twitter_complaints"

#2
by lx7 - opened

I am working with the Hugging Face ought/raft dataset (the twitter_complaints task) and I have noticed that every example in the test split carries the placeholder label "Unlabeled". Because of this, I am unable to calculate accuracy on the test set.
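A minimal sketch that reproduces what I am seeing (assuming the standard `datasets` library; depending on your version you may need to pass `trust_remote_code=True` to `load_dataset`, and the printed label names are from my local run):

```python
from datasets import load_dataset

# Load the twitter_complaints task of the RAFT benchmark
raft = load_dataset("ought/raft", "twitter_complaints")

# "Label" is a ClassLabel feature; index 0 is the "Unlabeled" placeholder
print(raft["train"].features["Label"].names)  # ['Unlabeled', 'complaint', 'no complaint']
print(set(raft["test"]["Label"]))             # {0} -- every test example is "Unlabeled"
print(len(raft["train"]), len(raft["test"]))  # 50 labelled train examples vs. a much larger test set
```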

I would like to seek advice from the community on how to handle this situation. If anyone has experience or suggestions on alternative methods or strategies for evaluating model performance in the absence of labels in the test dataset, I would greatly appreciate your input.
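In the meantime, the best workaround I have found is to hold out part of the 50 labelled train examples as a makeshift validation set. A rough sketch (the split size, seed, and the `predict` function are placeholders, not anything official from RAFT):

```python
from datasets import load_dataset

raft = load_dataset("ought/raft", "twitter_complaints")

# Hold out part of the 50 labelled train examples as a makeshift validation
# set, stratified so both real classes appear on each side of the split
split = raft["train"].train_test_split(
    test_size=0.2, seed=42, stratify_by_column="Label"
)
train_ds, val_ds = split["train"], split["test"]

def accuracy(predictions, references):
    """Plain accuracy over class indices."""
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

# `predict` stands in for whatever model is being evaluated (hypothetical)
# preds = [predict(ex["Tweet text"]) for ex in val_ds]
# print(accuracy(preds, val_ds["Label"]))
```

My understanding is that the test labels are withheld on purpose: RAFT is a benchmark, and official scores come from submitting test-set predictions to the RAFT leaderboard rather than from scoring them locally.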

Thank you for your help!

Excuse me, I encountered the same problem. How did you solve it in the end?

Did anyone manage to solve this?

Same issue here... how did you get labels for evaluation?
