The test dataset has some unlabeled (-1) items

#3
by Siki-77 - opened

I found that the test dataset has some items labeled -1. So weird.

Indeed, hiding the answers this way is common for test splits. See GLUE, for example:

(screenshot attached: Screenshot 2023-12-19 at 09.25.01.png)

However, there are also a lot of -1s in the train and validation sets.

That is explained in the dataset card and in the corresponding paper: about 2% of cases do not have a gold label because there was not enough consensus among the annotators.

> For each pair that we validated, we assigned a gold label. If any one of the three labels was chosen by at least three of the five annotators, it was chosen as the gold label. If there was no such consensus, which occurred in about 2% of cases, we assigned the placeholder label ‘-’. While these unlabeled examples are included in the corpus distribution, they are unlikely to be helpful for the standard NLI classification task, and we do not include them in either training or evaluation in the experiments that we discuss in this paper.
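In practice, that means you should drop the -1 rows before training or evaluating. A minimal sketch with toy in-memory data (the example rows are hypothetical; with the Hugging Face `datasets` library the equivalent is `ds.filter(lambda ex: ex["label"] != -1)`):

```python
# Toy rows standing in for (premise, hypothesis, label) examples.
# Label -1 marks examples where annotators reached no consensus.
examples = [
    {"premise": "A man plays guitar.", "hypothesis": "A man makes music.", "label": 0},
    {"premise": "A dog runs outside.", "hypothesis": "A cat is sleeping.", "label": -1},
    {"premise": "Kids are smiling.", "hypothesis": "The kids are sad.", "label": 2},
]

# Keep only examples that received a gold label.
labeled = [ex for ex in examples if ex["label"] != -1]
print(len(labeled))  # 2
```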

albertvillanova changed discussion status to closed
