Transparent LLMs Evaluation Metrics
Introducing TLEM: The Future of Language Model Evaluation 🌐✨
In an era where the globe is racing to train and launch increasingly sophisticated language models, there’s a pressing need for a unified standard to gauge their effectiveness. That’s where TLEM, or Transparent LLMs Evaluation Metrics – a nod to the French phrase tout le monde (everyone) – steps in. TLEM is not just another framework; it’s a revolution in the way we assess language models. Its name embodies our commitment to transparency and decentralization in the evaluation of large language models.
🌟 Why TLEM? Here’s Why!
Universal Standardization: With the international community eager to develop and unveil large language models, TLEM offers a much-needed standardized criterion to differentiate the good from the great.
Developer & User-Friendly: Existing open-source implementations often suffer from deep encapsulation, posing challenges for both developers and users. TLEM changes the game by being incredibly user-friendly and accessible.
Addressing the Self-Evaluation Bias: A common hurdle in the current landscape is that model developers evaluate their own models with their own harnesses, relying on their own assessments and only loosely referencing open-source evaluations. This has led to redundant effort and reduced reproducibility within the open-source community. TLEM tackles this issue head-on.
Designed for Ease and Decentralization: TLEM stands out with its extreme ease of use. Forget the hassle of manually pulling repositories and installing – TLEM simplifies it all. Moreover, its metrics are designed to be decentralized, empowering users to extend and contribute new evaluation metrics, fostering a community-driven approach.
🚀 Join the TLEM Movement!
TLEM is more than a framework; it’s a movement towards a more transparent, decentralized, and community-driven future in language model evaluation. Be a part of this exciting journey. Dive into the world of TLEM, where every contribution counts, and every evaluation brings us closer to excellence in language model development.
Let’s shape the future together with TLEM! 🌟💻🔍
Usage
Start evaluating your model in 3 lines
You can start evaluating your model with TLEM in three lines; tlem is designed to work without any manual installation.
```python
import evaluate

suite = evaluate.EvaluationSuite.load("SUSTech/tlem", download_mode="force_redownload")
suite.load("gsm8k")  # You can check the available datasets with suite.supported_datasets
suite.run(pipe := lambda x: x)
```
<class 'evaluate_modules.metrics.sustech--tlem.a09e0e4b7368f89944eb7781a52f3519caa4ffb8677312fbb90e48a613c8efdc.tlem.ReasoningMetric'>
{'gsm8k': 0.022744503411675512}
The lambda function stands in for a model pipeline: it takes a list of strings as input and returns a list of strings as output. You can use any model you want, as long as it can be wrapped this way. Below we use vLLM and the OpenAI API as examples, followed by a sketch for wrapping a local model:
```python
import aiohttp
from openai import AsyncOpenAI

session = aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=60 * 60 * 24 * 7))
url = "xxx"  # host (and port) of your model server
client = AsyncOpenAI(**{"base_url": f"http://{url}/v1/", "api_key": "EMPTY"})
```
```python
@suite.utils.async_pipe
async def chatgpt(msg):
    input = f"### Human: {msg}\n\n### Assistant: "
    try:
        resp = await client.completions.create(
            model="gpt-3.5-turbo",
            max_tokens=None,
            prompt=input,
            temperature=0,
        )
        return resp.choices[0].text
    except Exception as e:
        return "OpenAI Error"
```
```python
@suite.utils.async_pipe
async def vllm(msg):
    input = f"### Human: {msg}\n\n### Assistant: "
    data = {
        "prompt": input,
        "max_tokens": 4096,
        "n": 1,
        "temperature": 0,
    }
    try:
        async with session.post(f"http://{url}/generate", json=data) as response:
            response_json = await response.json()
            return response_json["text"][0][len(input) :]
    except Exception as e:
        return "Vllm Error"
```
Hackable
TLEM is designed to be hackable. Every tlem benchmark is a task in the suite, and suite.run simply runs all the tasks in the suite. For each task, you can check its inputs, labels, and outputs:
```python
import pandas as pd

task = suite[0]
# task.outputs is available after suite.run or task.run
pd.DataFrame({"input": task.samples, "label": task.labels, "output": task.outputs})
```
| | input | label | output |
| --- | --- | --- | --- |
| 0 | Janet’s ducks lay 16 eggs per day. She eats th... | Janet sells 16 - 3 - 4 = <<16-3-4=9>>9 duck eg... | Janet’s ducks lay 16 eggs per day. She eats th... |
| 1 | A robe takes 2 bolts of blue fiber and half th... | It takes 2/2=<<2/2=1>>1 bolt of white fiber\nS... | A robe takes 2 bolts of blue fiber and half th... |
| 2 | Josh decides to try flipping a house. He buys... | The cost of the house and repairs came out to ... | Josh decides to try flipping a house. He buys... |
| 3 | James decides to run 3 sprints 3 times a week.... | He sprints 3*3=<<3*3=9>>9 times\nSo he runs 9*... | James decides to run 3 sprints 3 times a week.... |
| 4 | Every day, Wendi feeds each of her chickens th... | If each chicken eats 3 cups of feed per day, t... | Every day, Wendi feeds each of her chickens th... |
| ... | ... | ... | ... |
| 1314 | John had a son James when he was 19. James is... | Dora is 12-3=<<12-3=9>>9\nSo James is 9*2=<<9*... | John had a son James when he was 19. James is... |
| 1315 | There are some oranges in a basket. Ana spends... | There are 60 minutes in an hour. Ana peels an ... | There are some oranges in a basket. Ana spends... |
| 1316 | Mark's car breaks down and he needs to get a n... | The discount on the radiator was 400*.8=$<<400... | Mark's car breaks down and he needs to get a n... |
| 1317 | Farmer Brown has 20 animals on his farm, all e... | Let C be the number of chickens.\nThere are 20... | Farmer Brown has 20 animals on his farm, all e... |
| 1318 | Henry and 3 of his friends order 7 pizzas for ... | There are 7*8=<<7*8=56>>56 slices in total.\nT... | Henry and 3 of his friends order 7 pizzas for ... |

1319 rows × 3 columns
You can verify the metric by scoring the labels against themselves (which should give a perfect score) and then scoring the model outputs:
```python
task.metric(task.labels, task.labels)
```
{'gsm8k': 1.0}
```python
task.metric(task.outputs, task.labels)
```
{'gsm8k': 0.022744503411675512}
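Because each task is self-contained, you do not need to rerun the whole suite while iterating on a single benchmark; task.run takes the same pipe as suite.run. The snippet below is a small sketch that reuses the identity pipe from earlier.

```python
# Run just this task and get its score back, without touching the rest of the suite.
result = task.run(pipe)
print(result)

# The raw inputs and outputs stay on the task object, so you can spot-check individual examples.
print(task.samples[0])
print(task.outputs[0])
```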
Contribution
You can easily add your own task with the Task class (exposed as suite.task_class). For example, if you want a task that evaluates the model on a specific dataset with a specific metric, you can do it in this way:
```python
task = suite.task_class(
    dataset_name=("gsm8k", "main"),
    input_column="question",
    label_column="answer",
    metric_name="evaluate-metric/competition_math",
)
task.run(pipe)
```
<class 'evaluate_modules.metrics.evaluate-metric--competition_math.b85814e0172dae97fa4bd6eff6f33caba2ff9547860acabd50222c6dee474a24.competition_math.CompetitionMathMetric'>
{'accuracy': 0.0}
The metric can live in any Hugging Face space. TLEM is designed to be decentralized, allowing you to run evaluations on private datasets without having to contribute your code back to TLEM. You can also define the metric locally:
```python
import random
import numpy as np


def my_metric(responses, references):
    # return .99
    scores = [random.choices([0, 1]) for resp, ans in zip(responses, references)]
    return np.mean(scores)


task.metric = my_metric
task.run(pipe)
```
0.5140257771038665
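The random metric above is only a stand-in. A useful local metric just needs to accept (responses, references) and return a score; here is a minimal sketch of a normalized exact-match metric (the normalization rules are our own illustration, not something TLEM prescribes).

```python
import re
import numpy as np


def exact_match(responses, references):
    def normalize(text):
        # Lowercase, drop punctuation, and collapse whitespace before comparing.
        return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

    scores = [float(normalize(resp) == normalize(ref)) for resp, ref in zip(responses, references)]
    return np.mean(scores)


task.metric = exact_match
task.run(pipe)
```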
TLEM Leaderboard
If you wish to add your model's results to the TLEM leaderboard, you must include the code you used to run TLEM, together with its outputs, in your model card. We do not actively re-run your code; you are responsible for the accuracy of your results.
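As a purely illustrative sketch, a model card entry could look something like the following; my_model_pipe is a placeholder for your own list-of-strings to list-of-strings wrapper, and each benchmark you report should be loaded by name (see suite.supported_datasets).

```python
import evaluate

suite = evaluate.EvaluationSuite.load("SUSTech/tlem", download_mode="force_redownload")
suite.load("gsm8k")  # repeat for each benchmark you report
results = suite.run(my_model_pipe)
print(results)  # paste the printed scores into the model card alongside this snippet
```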
| model | mmlu-chat | cmmlu-chat | ceval-chat | gsm8k | BBH | MATH | average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SUS-Chat-34B | 77.35 | 78.68 | 82.42 | 80.06 | 67.62 | 28.80 | 69.155000 |
| Qwen-72B-Chat | 74.52 | 77.02 | 77.22 | 76.57 | 72.63 | 35.90 | 68.976667 |
| DeepSeek-67B-Chat | 69.43 | 48.51 | 59.70 | 74.45 | 69.73 | 29.56 | 58.563333 |
| Yi-34B-Chat | 66.96 | 55.16 | 77.16 | 63.76 | 61.54 | 10.02 | 55.766667 |
| OrionStar-34B | 68.51 | 66.88 | 65.13 | 54.36 | 62.88 | 12.80 | 55.093333 |
TLEM leaderboard
Embrace the change. Embrace TLEM.