# llm-perf-leaderboard/src/assets/text_content.py
TITLE = """<h1 align="center" id="space-title">🤗 Open LLM-Perf Leaderboard</h1>"""
INTRODUCTION_TEXT = """
The 🤗 Open LLM-Perf Leaderboard aims to benchmark the performance (latency & throughput) of Large Language Models (LLMs) on different backends and hardware using [Optimum-Benchmark](https://github.com/huggingface/optimum-benchmark).
🤗 Anyone from the community can submit a model or hardware+backend configuration for automated benchmarking on the 🤗 GPU cluster.
Model submissions should be made in the [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
Hardware+Backend submissions should be made in the [🤗 Open LLM-Perf Leaderboard's community discussions](https://huggingface.co/spaces/optimum/llm-perf-leaderboard/discussions).
"""