Duplicates in splits

#2
by slvnwhrl - opened

Hi,

so first of all: thank you for mteb, it's a great resource! I was looking at this dataset more closely to understand which portion of the original data you used for the clustering task. I realised that there are a lot of duplicates in the splits. Is this on purpose? Or am I missing something? I couldn't find a script for this set on GitHub (https://github.com/embeddings-benchmark/mteb/tree/main/scripts) so I'm asking here.

```python
from datasets import load_dataset

# same revision as in https://github.com/embeddings-benchmark/mteb/blob/main/mteb/tasks/Clustering/TwentyNewsgroupsClustering.py
dataset = load_dataset("mteb/twentynewsgroups-clustering", revision="6125ec4e24fa026cec8a478383ee943acfbd5449")

# unique sentences per clustering set (the test split holds 10 such sets)
print([len(set(dataset["test"][i]["sentences"])) for i in range(10)])
# outputs: [835, 1533, 2198, 2703, 3204, 3693, 4112, 4523, 4895, 5241]

# total sentences per clustering set
print([len(dataset["test"][i]["sentences"]) for i in range(10)])
# outputs: [1000, 2101, 3202, 4303, 5404, 6505, 7606, 8707, 9808, 10909]
```
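In case it is useful for anyone evaluating on unique texts only, here is a minimal deduplication sketch (assuming each row carries parallel `sentences` and `labels` lists, as in the snippet above; `dedup_set` is just a hypothetical helper name):

```python
# Sketch: drop duplicate sentences within one clustering set while keeping
# each remaining sentence's label aligned with it.
def dedup_set(sentences, labels):
    seen = set()
    kept_sentences, kept_labels = [], []
    for sentence, label in zip(sentences, labels):
        if sentence not in seen:
            seen.add(sentence)
            kept_sentences.append(sentence)
            kept_labels.append(label)
    return kept_sentences, kept_labels

# Example with one duplicate ("a" appears twice):
s, l = dedup_set(["a", "b", "a", "c"], [0, 1, 0, 2])
print(s, l)  # ['a', 'b', 'c'] [0, 1, 2]
```

Note this changes the set sizes, so scores would no longer be comparable to the official benchmark numbers.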

Thank you for your help!

Massive Text Embedding Benchmark org

I sadly no longer have the original scripts that created the splits. But the splits were created randomly, so it is quite likely that there is overlap between them.
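For illustration only (the original generation script is lost, so this is a guess at the mechanism, not a reconstruction): if a set of n sentences is drawn with replacement from a pool of N documents, duplicates within the set are expected, and the expected number of unique items is N(1 − (1 − 1/N)^n):

```python
import random

# Illustrative sketch of random sampling *with* replacement producing
# within-set duplicates. N = 18,846 posts in 20 Newsgroups; n = 1000,
# the size of the smallest clustering set above.
random.seed(0)
N, n = 18846, 1000
sample = [random.randrange(N) for _ in range(n)]

# E[unique] = N * (1 - (1 - 1/N) ** n)
expected_unique = N * (1 - (1 - 1 / N) ** n)
print(len(set(sample)), round(expected_unique))
```

This simple model predicts roughly 974 unique items out of 1000, while the observed count is 835, so the actual procedure likely differed (e.g. sampling from a smaller subset of the corpus), but it shows how random draws alone produce duplicates.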
