upload LongEmbed results
Hi! I'm trying to upload results on LongEmbed. As a first attempt, I have only uploaded the results for e5-mistral so far. There is one thing I'm not quite sure about, regarding the passkey and needle tasks. For the other four tasks, we simply report the ndcg@10 score on the test split, but these two tasks have many splits ({test_256, test_512, test_1024, test_2048, test_4096, test_8192, test_16384, test_32768}), and we average the ndcg@1 scores across the splits to get the final score for each task. In this case, do I need to do something, or will the mteb leaderboard handle it automatically?
I think the result files are good as is, but we will need some code to average them, either in results.py here or in the leaderboard. I think results.py is the better place, but I may be wrong.
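The averaging step described above could be sketched roughly as follows. This is a hypothetical helper, not the actual results.py code; the split names follow the issue description, but the result-file schema and function name are assumptions:

```python
# Hypothetical sketch: average per-split ndcg@1 scores for the
# passkey/needle tasks. The split names (test_256 ... test_32768)
# come from the discussion above; the scores here are made up.
from statistics import mean

# Example per-split ndcg@1 scores keyed by context length.
split_scores = {
    f"test_{2 ** k}": score
    for k, score in zip(range(8, 16), [1.0, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
}

def average_over_splits(scores: dict[str, float]) -> float:
    """Simple mean of the per-split scores, as the final task score."""
    return mean(scores.values())

final_score = average_over_splits(split_scores)
```
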
Do you want to add the other result files? Then I can merge this & take care of adding the leaderboard tab etc :)
yes, sorry for the late reply! I will add the other result files soon :-)
Hi @Muennighoff , I've added more result files. Really appreciate your support!
Some results are still missing, no? E.g. OpenAI etc. cc @dwzhu
I've added a preliminary leaderboard tab here: https://huggingface.co/spaces/mteb/leaderboard?task=retrieval&language=longembed
Feel free to edit the tab / add more results. It seems like they don't quite match your paper yet; maybe you could investigate why? (It's probably the metrics, i.e. averaging ndcg_at_10. Maybe we need to allow multiple main metrics for a single leaderboard tab; feel free to open a PR on the leaderboard code if that is the case.)
Really appreciate your generous help! I guess it is because the scores we report for Passkey and Needle are Acc@1 (or nDCG@1; the two are identical here), while the scores on the leaderboard are nDCG@10. @Muennighoff
I would also like to ask: what kind of modifications do I need to make so that the results on the leaderboard come out right? Is it just a matter of modifying this line: https://github.com/embeddings-benchmark/mteb/blob/fe912bc2d676772ce059522639bcd14e9c80ecb2/mteb/tasks/Retrieval/eng/LEMBPasskeyRetrieval.py#L33C21-L33C31 ? Also, on the leaderboard page, I notice that the metric is explained as "Metric: Normalized Discounted Cumulative Gain @ k (ndcg_at_10)". What if I want to explain that for Passkey and Needle, the metric is ndcg_at_1?
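The change being asked about could look something like the sketch below. The field and class names here mirror the general shape of mteb task definitions, but this is an assumption for illustration, not the real LEMBPasskeyRetrieval.py file:

```python
# Hypothetical sketch: point a task's main score at ndcg_at_1 instead
# of the default ndcg_at_10. A local stand-in dataclass is used here;
# the real task metadata class in mteb may differ.
from dataclasses import dataclass

@dataclass
class TaskMetadata:
    name: str
    main_score: str  # which metric the leaderboard should surface

passkey_metadata = TaskMetadata(
    name="LEMBPasskeyRetrieval",
    main_score="ndcg_at_1",  # assumed change from ndcg_at_10
)
```
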
I think it requires some changes in the code here: https://huggingface.co/spaces/mteb/leaderboard/blob/main/app.py
ok, I'll try that
That'd be amazing - feel free to open a PR!
Lmk when the other results are ready so we can merge them in this repo too
Thanks! I have opened a PR here: https://huggingface.co/spaces/mteb/leaderboard/discussions/124. I have not managed to run the demo myself due to some networking issues, but I expect the modifications will work. Looking forward to your advice! @Muennighoff