wenjiao committed on
Commit
5989add
1 Parent(s): 61862ec

Update src/display/about.py

Files changed (1)
  1. src/display/about.py +3 -0
src/display/about.py CHANGED
@@ -16,6 +16,9 @@ LLM_BENCHMARKS_TEXT = f"""
 ## ABOUT
 Quantization is a key technique for making LLMs more accessible and practical for a wide range of applications, especially where computational resources are a limiting factor. While there is no tool to track and compare quantization LLMs with different quantization algorithms, which is hard to filter out the genuine progress that is being made by the open-source community and which model is the current state of the art.
 
+### Introducing V1.0
+V1.0 marks the launch of our comprehensive evaluation system for quantized language models.
+This version introduces a standardized platform for submitting and benchmarking models, providing clear insights into their performance across various tasks.
 Submit a model for automated evaluation on the CPU/GPU cluster on the "Submit" page!
 The leaderboard's backend runs the great [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) - read more details below!
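The text added in this commit points to the Eleuther AI Language Model Evaluation Harness as the leaderboard's evaluation backend. Below is a minimal sketch of how a submitted model might be scored with that harness; the model id, task list, and batch size are illustrative assumptions, and the `lm_eval.simple_evaluate` API shown is the one exposed by recent (v0.4+) harness releases, not code from this commit.

```python
# Hypothetical sketch (not part of this commit): evaluating a submitted model
# with the Eleuther AI lm-evaluation-harness Python API.
import lm_eval

# Placeholder checkpoint and tasks; a real submission would typically point at
# a quantized repository (e.g. a GPTQ or AWQ checkpoint) on the Hugging Face Hub.
results = lm_eval.simple_evaluate(
    model="hf",                               # Hugging Face transformers backend
    model_args="pretrained=facebook/opt-125m",  # example model id, assumption
    tasks=["arc_challenge", "hellaswag"],     # example benchmark tasks
    batch_size=8,
)

# Per-task metrics that a leaderboard could collect and display.
print(results["results"])
```

A leaderboard backend would run something along these lines on its CPU/GPU cluster for each submission and store the per-task scores for display.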