AI & ML interests

We create an LLM benchmark and update it every two weeks to keep it "uncheatable".


"Uncheatable" LLMs Evaluation - LatestEval

Humans receive new test questions every exam, but LLMs? They have been evaluated on the same benchmarks for far too long. Why not assess LLMs with fresh tests, just as we test our students? In this project, we introduce LatestEval, which automatically constructs language model benchmarks from the latest materials (e.g., arXiv, BBC, GitHub) to prevent "cheating" and data contamination.

News!!

Key Features

  1. We maintain a QA benchmark that is rebuilt every two weeks from the latest online resources (those created within the preceding two weeks). This approach aims to avoid 1) LLMs being trained on the test set (cheating); and 2) the unintentional inclusion of test questions in the training data (data contamination).
  2. We analyzed real Human-AI conversations to ensure the automated benchmark aligns well with real-life applications (see the paper for more details).
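
The update cycle above can be sketched as follows. This is a hypothetical illustration, not the project's actual pipeline: the document schema, the two-week window constant, and the cloze-style question builder are all assumptions made for the example.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the two-week refresh cycle: keep only documents
# published inside the current window, so test passages postdate any
# model's training data. Field names and helpers are illustrative.
WINDOW = timedelta(days=14)

def select_fresh_documents(documents, now):
    """Return only documents published within the last two weeks."""
    cutoff = now - WINDOW
    return [doc for doc in documents if doc["published"] >= cutoff]

def build_qa_benchmark(documents, now):
    """Turn fresh passages into (question, answer) probes.

    A trivial cloze-style placeholder is used here: the model must
    complete the passage's final sentence. The real pipeline derives
    questions from Human-AI conversation patterns (see the paper).
    """
    benchmark = []
    for doc in select_fresh_documents(documents, now):
        *context, last = doc["text"].split(". ")
        benchmark.append({
            "source": doc["source"],
            "question": ". ".join(context) + ". ___",
            "answer": last,
        })
    return benchmark

if __name__ == "__main__":
    now = datetime(2024, 1, 15)
    docs = [
        {"source": "arxiv", "published": datetime(2024, 1, 10),
         "text": "New result. The key idea is X"},
        {"source": "bbc", "published": datetime(2023, 6, 1),
         "text": "Old article. Stale content"},
    ]
    bench = build_qa_benchmark(docs, now)
    # Only the recent arXiv document survives the freshness filter.
    print(len(bench))
    print(bench[0]["answer"])
```

Because the filter is purely date-based, each refresh discards every passage old enough to have entered a training corpus, which is what makes the benchmark hard to "cheat" on.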
