---
license: cc-by-4.0
configs:
  - config_name: mteb_en
    data_files: leaks_and_duplications_MTEB_EN.csv
    sep: ;
  - config_name: mteb_fr
    data_files: leaks_and_duplications_MTEB_FR.csv
    sep: ;
size_categories:
  - n<1K
---

LLE MTEB

This dataset lists the presence or absence of leaks and duplicate data in the datasets constituting the MTEB leaderboard (EN & FR).

For more information on the methodology and on what the column names correspond to, please consult the following blog post.
To keep things simple, we invite the reader to focus on the percentages in the text_and_label_test_biased column, which give the proportion of biased data in the test split of the dataset in question.
Rows marked "OK" correspond to datasets that contain only a test split: in the absence of train or validation splits, there can be no leaks.
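As a minimal sketch of how to read these files (assuming pandas is available; only the text_and_label_test_biased column name comes from this card, the other column and values below are illustrative):

```python
import io
import pandas as pd

# Hypothetical excerpt of the semicolon-separated CSV. Only the
# text_and_label_test_biased column name is taken from the card;
# the dataset names and percentages here are made up for illustration.
csv_excerpt = """dataset;text_and_label_test_biased
DatasetA;6.3%
DatasetB;OK
DatasetC;1.2%
"""

df = pd.read_csv(io.StringIO(csv_excerpt), sep=";")

# Rows marked "OK" have only a test split, so no leak is possible.
no_leak_possible = df[df["text_and_label_test_biased"] == "OK"]

# The remaining rows report the share of biased data in the test split.
biased = df[df["text_and_label_test_biased"] != "OK"]

print(no_leak_possible["dataset"].tolist())  # ['DatasetB']
print(biased["dataset"].tolist())            # ['DatasetA', 'DatasetC']
```

The same `sep=";"` applies when loading the real CSV files listed in the configs above.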

MTEB EN

For the English part, we evaluated the quality of all the datasets present in the file run_mteb_english.
We can observe that 24% of MTEB EN datasets contain leaks (up to 6.3% of the test split).

MTEB FR

For the French part, we evaluated the quality of all the datasets present in the file run_mteb_french.
Note: we were unable to download the datasets for the XPQARetrieval (jinaai/xpqa) and MintakaRetrieval (jinaai/mintakaqa) tasks due to encoding problems, so we used the original Amazon datasets available on GitHub instead. Since there may be differences between the versions on MTEB and those on GitHub, the results below exclude these datasets (24 datasets instead of 26); the reader can nevertheless find in this dataset the results we obtained with the GitHub versions.
We can observe that 46% of MTEB FR datasets contain leaks (an indicative figure until the 7 missing datasets can be evaluated).

Global

It should be noted that the percentages reported come from evaluating each dataset individually; the real bias may be larger.
Indeed, if you concatenate datasets (for example, all the train splits available for the STS task in a given language), an item in the train split of dataset A may be absent from the test split of A yet present in the test split of dataset B, thus creating a leak. The same logic applies to duplicate data.
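This cross-dataset scenario can be sketched with a toy example (the set-intersection check below is our illustration of the idea, not the method used to build this dataset):

```python
# Toy illustration of a cross-dataset leak: each dataset is clean in
# isolation, but pooling the train splits leaks into a test split.
train_a = {"the cat sat", "hello world"}
test_a = {"goodbye moon"}       # no overlap with train_a: A is clean
train_b = {"some other text"}
test_b = {"hello world"}        # overlaps train_a, not train_b: B is clean

# Each dataset individually shows no leak...
assert not (train_a & test_a)
assert not (train_b & test_b)

# ...but concatenating the train splits creates one.
pooled_train = train_a | train_b
leaked = pooled_train & (test_a | test_b)
print(leaked)  # {'hello world'}
```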

We therefore invite users to take care when training their models (and even to avoid using the train splits of the datasets listed here as having leaks).
We have also reached out to the MTEB maintainers, who are currently looking into cleaning up their leaderboards so that users can keep trusting the tool when evaluating or choosing a model for their use case.