Significant concerns regarding the ranking methodology and evaluation results reported by ParsBench

#2 opened by 22Sunnyy22

Hi,

Thank you for your effort and the benchmarks. I just want to mention a couple of things.

There are significant concerns regarding the ranking methodology and evaluation results reported by ParsBench, which I will outline below.

Ranking Methodology Issues:
ParsBench ranks models based on the average accuracy across several benchmarks without considering the differing difficulty levels and validity of these benchmarks. For example, they average results from datasets like Khayyam, where model performance is generally below 50%, with benchmarks like ParsiNLU and Persian Math, where performance hovers around 70-80%. This method disregards the differences in difficulty and data quality between these datasets, leading to potentially misleading rankings. Additionally, the datasets used in the evaluation have varying levels of difficulty and data cleanliness, which are not accounted for in the ranking process. Some datasets might not be well-curated, further affecting the reliability of the results.

Inconsistent Persian MMLU Results:
The results reported for Persian MMLU are inconsistent with other evaluations. For instance, GPT-4 scores only 31%, which is surprisingly low. Moreover, the recent evaluation of JabirLLM-400B on Persian MMLU resulted in an accuracy of about 29%, while LLaMA 3 (70B) scored 37%. Assuming the evaluation is correct, this suggests that the model's understanding of Persian may have been compromised.

Anomalous Results with Qwen Models:
A particularly strange finding is that the Qwen 2 (7B) model outperforms the Qwen 2 (70B) model on the Persian MMLU benchmark, which contradicts the expectation that a larger model would generally perform better.

ParsBench org

Hi,
Thanks for sharing your thoughts on the leaderboard.
This leaderboard is my first attempt at building a leaderboard and the beginning of the path toward an official leaderboard with valuable datasets.
So for the ranking methodology issues, I have no argument. We (now as a team) are working on a more reliable and valuable leaderboard workflow, which will be released soon.

For the inconsistent Persian MMLU results, I used zero-shot prompts due to limited computing power and limited credits for the OpenAI models. Most of the current leaderboards and benchmarks use 5-shot prompts for the MMLU task, so this may be one of the main reasons for the different results.
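For context, here is a rough sketch of the zero-shot vs. 5-shot difference for an MMLU-style task; the field names and template below are assumptions for illustration only, not our exact prompt code:

```python
def format_question(item, include_answer=False):
    """Render one multiple-choice item; optionally append the gold answer."""
    lines = [item["question"]]
    lines += [f"{label}. {choice}" for label, choice in zip("ABCD", item["choices"])]
    lines.append(f"Answer: {item['answer']}" if include_answer else "Answer:")
    return "\n".join(lines)

def build_prompt(test_item, dev_items=(), k=0):
    """k=0 -> zero-shot prompt; k=5 -> five solved dev examples are prepended."""
    shots = [format_question(ex, include_answer=True) for ex in list(dev_items)[:k]]
    return "\n\n".join(shots + [format_question(test_item)])

# Example usage with a hypothetical item:
item = {"question": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer": "B"}
print(build_prompt(item, dev_items=[item], k=1))  # one "shot" just for illustration
```

The in-context examples in a 5-shot prompt demonstrate the expected answer format and prime the model for the task, so scores are usually a few points higher than zero-shot, which could explain part of the gap.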

And for the last one, we ran the benchmarks on Qwen2-7B and Qwen2-70B with the same methods; there was no change in the configs or benchmark parameters.

We're working on these evaluation methods and datasets, and we will release a new version of the ParsBench leaderboard very soon.

Thanks again for sharing your concerns.
