BenchmarkBot committed on
Commit
ad5bd56
β€’
1 Parent(s): 4cfc121

updated intro text

Files changed (1):
  1. src/assets/text_content.py +5 -5
src/assets/text_content.py CHANGED
@@ -3,9 +3,9 @@ TITLE = """<h1 align="center" id="space-title">πŸ€— Open LLM-Perf Leaderboard
 INTRODUCTION_TEXT = f"""
 The πŸ€— Open LLM-Perf Leaderboard πŸ‹οΈ aims to benchmark the performance (latency & throughput) of Large Language Models (LLMs) with different hardwares, backends and optimizations using [Optimum-Benchmark](https://github.com/huggingface/optimum-benchmark) and [Optimum](https://github.com/huggingface/optimum) flavors.
 
-Anyone from the community can request a model or a hardware+backend+optimization configuration for automated benchmarking:
-- Model requests should be made in the [πŸ€— Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and will be added to the πŸ€— Open LLM-Perf Leaderboard πŸ‹οΈ automatically once they're publicly available. That's mostly because we don't want to benchmark models that don't have an evaluation score yet.
-- Hardware+Backend+Optimization requests should be made in the πŸ€— Open LLM-Perf Leaderboard πŸ‹οΈ [community discussions](https://huggingface.co/spaces/optimum/llm-perf-leaderboard/discussions) for open discussion about their relevance and feasibility.
+Anyone from the community can request a model or a hardware/backend/optimization configuration for automated benchmarking:
+- Model evaluation requests should be made in the [πŸ€— Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and will be added to the πŸ€— Open LLM-Perf Leaderboard πŸ‹οΈ automatically once they're publicly available.
+- Hardware/Backend/Optimization performance requests should be made in the [community discussions](https://huggingface.co/spaces/optimum/llm-perf-leaderboard/discussions) to assess their relevance and feasibility.
 """
 
 SINGLE_A100_TEXT = """<h3>Single-GPU Benchmark (1xA100):</h3>
@@ -23,8 +23,8 @@ CITATION_BUTTON_TEXT = r"""@misc{open-llm-perf-leaderboard,
 publisher = {Hugging Face},
 howpublished = "\url{https://huggingface.co/spaces/optimum/llm-perf-leaderboard}",
 @software{optimum-benchmark,
-author = {Ilyas Moutawwakil},
+author = {Ilyas Moutawwakil},
 publisher = {Hugging Face},
-title = {Optimum-Benchmark: A framework for benchmarking the performance of Transformers models with different hardwares, backends and optimizations.},
+title = {Optimum-Benchmark: A framework for benchmarking the performance of Transformers models with different hardwares, backends and optimizations.},
 }
"""