Contaminated?

#6 opened by kno10

The Intel neural chat data includes GSM8K, which is also part of the leaderboard test suite.
Hence this model is contaminated, and the benchmark results are not reliable.

Regardless of any test sets leaking into the training data and inflating that score, this model's still a respectable effort.

@kno10 do you mean this dataset https://huggingface.co/datasets/meta-math/MetaMathQA includes GSM8K?

https://huggingface.co/datasets/Intel/neural-chat-dataset-v2
which appears to be the latest Intel neural-chat data that I could find, contains
https://huggingface.co/datasets/TigerResearch/tigerbot-gsm-8k-en
which contains 8.79k rows, i.e., the full GSM8K dataset, including the test split.
This would explain the high performance on the GSM8K benchmark.
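
Anyone can check the overlap directly with the `datasets` library. A minimal sketch, assuming the tigerbot dataset stores each question in an "instruction" column (adjust the column name if the dataset uses a different one):

```python
from datasets import load_dataset

# Official GSM8K test split (1,319 questions) and the tigerbot copy
gsm8k_test = load_dataset("gsm8k", "main", split="test")
tigerbot = load_dataset("TigerResearch/tigerbot-gsm-8k-en", split="train")

# Compare normalized question texts
test_questions = {q.strip() for q in gsm8k_test["question"]}
train_questions = {q.strip() for q in tigerbot["instruction"]}  # assumed column name

overlap = test_questions & train_questions
print(f"{len(overlap)} of {len(test_questions)} GSM8K test questions appear in the training data")
```

If most of the 1,319 test questions show up verbatim, the test split really is in the training mix.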

kno10 changed discussion title from Contaminated! to Contaminated?

Of course I do not know exactly what was used here, but the author mentions neuralchat.

> https://huggingface.co/datasets/Intel/neural-chat-dataset-v2
> which appears to be the latest Intel neural-chat data that I could find

That dataset wasn't used here (that is an instruction-tuning dataset).

The author made the whole code available as a Colab notebook:
https://huggingface.co/CultriX/MistralTrix-v1/blob/main/MistralTrix.ipynb

You can see that it's this dataset he's loading (which is a completely different DPO fine-tuning dataset):
https://huggingface.co/datasets/Intel/orca_dpo_pairs

```python
from datasets import load_dataset

# Load the DPO preference-pair dataset
dataset = load_dataset("Intel/orca_dpo_pairs")["train"]
```
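
To confirm what this dataset actually contains, you can inspect it directly (the expected column names in the comment below are from memory of the dataset card, not from the Colab):

```python
# Sanity check: preference-pair columns, not GSM8K problems
print(dataset.column_names)  # expected something like ['system', 'question', 'chosen', 'rejected']
print(dataset[0])
```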

According to the Colab, it is based not on pure Mistral, but on

```python
model_name = "zyh3826/GML-Mistral-merged-v1"
```

which is supposedly a mix of quantumaikr/quantum-v0.01 and mncai/mistral-7b-dpo-v5, neither of which appears to have documented its training data.
So it might still be contaminated; the performance is suspicious.
