陈俊杰 committed on
Commit da31ad5
1 Parent(s): 3d4d37c
Files changed (1)
  1. app.py +1 -1
app.py CHANGED
@@ -138,7 +138,7 @@ if page == "Introduction":
     <div style='font-size: 48px;line-height: 1.8;'>
     The Automatic Evaluation of LLMs (AEOLLM) task is a new core task in [NTCIR-18](http://research.nii.ac.jp/ntcir/ntcir-18) to support in-depth research on large language models (LLMs) evaluation. As LLMs grow popular in both fields of academia and industry, how to effectively evaluate the capacity of LLMs becomes an increasingly critical but still challenging issue. Existing methods can be divided into two types: manual evaluation, which is expensive, and automatic evaluation, which faces many limitations including the task format (the majority belong to multiple-choice questions) and evaluation criteria (occupied by reference-based metrics). To advance the innovation of automatic evaluation, we proposed the Automatic Evaluation of LLMs (AEOLLM) task which focuses on generative tasks and encourages reference-free methods. Besides, we set up diverse subtasks such as summary generation, non-factoid question answering, text expansion, and dialogue generation to comprehensively test different methods. We believe that the AEOLLM task will facilitate the development of the LLMs community.
     </div>
-    """)
+    """, unsafe_allow_html=True)
 
 elif page == "Methodology":
     st.header("Methodology")