Dataset description

#5 opened by bappadityashome

As per the dataset description, a random rule should be missing for each user in the public test set.
However, looking into the dataset, I found that the public test set has the same number of columns as the public train set, with no missing values.

So my question is: is the public dataset the validation set? If so, which one is the actual test set?

Also, the sample_submission.csv file contains a rank column, and it is unclear how the rank of a rule should be determined.

Wyze Labs org

To clarify further: the public test set should not be treated as a validation set, as it contains different users from the training data. The users in the public test set are completely disjoint from those in the public training set. For each user in the test set, we have omitted one random rule (row) that was present for that user in the original data.

The goal is for the recommendation model to predict this missing rule and rank it highly in its list of recommended rules. During prediction, the model assigns a probability score to each candidate rule. By ranking the candidates by these scores, we can evaluate whether the model properly prioritizes the omitted rule for each user.
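For illustration, here is a minimal sketch of that ranking check for a single user, assuming you have per-rule scores from your model; the names (scores, held_out_rule) are purely illustrative and not the official evaluation code:

```python
# Sketch: rank of the omitted rule when candidates are sorted by model score.
def rank_of_held_out(scores: dict, held_out_rule: str) -> int:
    """Return the 1-based rank of the held-out rule under descending score order."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(held_out_rule) + 1

# Example: model scores for three candidate rules of one test user.
scores = {"rule_A": 0.12, "rule_B": 0.81, "rule_C": 0.45}
print(rank_of_held_out(scores, "rule_B"))  # -> 1, the omitted rule is ranked first
```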

The submission file requires the predicted rules to be ranked in this way so that the priority of the recommendations can be analyzed. The rank column simply indexes the rules by their priority according to the model's scores; it does not imply any specific numerical score values.
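As a hedged sketch of how to produce that rank column from per-user scores (the column names user_id, rule, and rank are assumptions here; please check sample_submission.csv for the exact schema):

```python
import pandas as pd

# Hypothetical per-user candidate scores from a recommendation model.
preds = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2", "u2"],
    "rule":    ["rule_A", "rule_B", "rule_C", "rule_A", "rule_C"],
    "score":   [0.12, 0.81, 0.45, 0.30, 0.70],
})

# Rank rules within each user by descending score; rank 1 = top recommendation.
preds["rank"] = (
    preds.groupby("user_id")["score"]
    .rank(ascending=False, method="first")
    .astype(int)
)

submission = preds.sort_values(["user_id", "rank"])[["user_id", "rule", "rank"]]
submission.to_csv("submission.csv", index=False)
```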

mmkamani7 changed discussion status to closed
