How to Evaluate Model Performance with 'Unlabelled' Test Data in 'twitter_complaints'

by lx7

I am working with the Hugging Face ought/raft dataset (twitter_complaints subset) and I have noticed that every example in the test split carries the placeholder label "Unlabeled". Because of this, I am unable to calculate accuracy on the test set.

I would like to seek advice from the community on how to handle this situation. If anyone has experience with this dataset, or suggestions for alternative strategies for evaluating model performance when the test split is unlabeled, I would greatly appreciate your input.
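For context, here is a minimal sketch of what I am seeing, along with the workaround I am currently considering (holding out part of the labeled train split for local evaluation); it assumes the Hugging Face `datasets` library and the standard `ought/raft` loading call:

```python
from datasets import load_dataset

# Load the twitter_complaints subset of RAFT.
raft = load_dataset("ought/raft", "twitter_complaints")

# Every test example carries the placeholder class 0 ("Unlabeled"),
# so accuracy cannot be computed against the test split directly.
print(raft["test"].features["Label"].names)  # ['Unlabeled', 'complaint', 'no complaint']
print(set(raft["test"]["Label"]))            # {0}

# Possible workaround: hold out part of the small labeled train split
# and use it as a local validation set for computing accuracy.
splits = raft["train"].train_test_split(test_size=0.2, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
print(len(train_ds), len(val_ds))
```

Since the RAFT train split contains only 50 labeled examples, I suspect cross-validation over it might give a more reliable estimate than a single held-out split, but I am open to other approaches.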

Thank you for your help!

Excuse me, I encountered the same problem. How did you solve it in the end?

Did any of you guys solve the problem?

Same issue here... has anyone figured out how to get labels for the test set?

The test subset is worse than useless for actual evaluation. It may be useful as a set of examples to be labeled by a model. But that's about it.
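For example, here is a minimal sketch of that use. The `classify` function below is a hypothetical placeholder for whatever model you are running, not part of any library; the `ID`, `Tweet text`, and `Label` column names come from the dataset itself:

```python
from datasets import load_dataset

raft = load_dataset("ought/raft", "twitter_complaints")
label_names = raft["test"].features["Label"].names  # ['Unlabeled', 'complaint', 'no complaint']

def classify(text: str) -> int:
    # Hypothetical placeholder: swap in your actual model's prediction logic.
    return label_names.index("no complaint")

# Attach model-generated labels to the otherwise unlabeled test examples,
# e.g. to prepare a RAFT leaderboard submission.
predictions = [
    {"ID": example["ID"], "Label": label_names[classify(example["Tweet text"])]}
    for example in raft["test"]
]
print(predictions[0])
```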
