mlabonne committed
Commit 0ebce0d
• 1 Parent(s): d265ff6

Update app.py

Files changed (1): app.py +3 -3
app.py CHANGED
@@ -86,7 +86,7 @@ def main():
     st.set_page_config(page_title="YALL - Yet Another LLM Leaderboard", layout="wide")
 
     st.title("🏆 YALL - Yet Another LLM Leaderboard")
-    st.markdown("Leaderboard made with [🧐 LLM AutoEval](https://github.com/mlabonne/llm-autoeval) using [Nous](https://huggingface.co/NousResearch) benchmark suite. It's a collection of my own evaluations.")
+    st.markdown("Leaderboard made with 🧐 [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) using [Nous](https://huggingface.co/NousResearch) benchmark suite.")
 
     content = create_yall()
     tab1, tab2 = st.tabs(["🏆 Leaderboard", "📝 About"])
@@ -137,7 +137,7 @@ def main():
                 help="Number of likes on Hugging Face",
                 format="%d ❤️",
             ),
-            "URL": st.column_config.LinkColumn("App URL"),
+            "URL": st.column_config.LinkColumn("URL"),
         },
         hide_index=True,
     )
@@ -179,7 +179,7 @@ def main():
 
     ### Reproducibility
 
-    You can easily reproduce these results using [🧐 LLM AutoEval](https://github.com/mlabonne/llm-autoeval/tree/master), a colab notebook that automates the evaluation process (benchmark: `nous`). This will upload the results to GitHub as gists. You can find the entire table with the links to the detailed results [here](https://gist.github.com/mlabonne/90294929a2dbcb8877f9696f28105fdf).
+    You can easily reproduce these results using 🧐 [LLM AutoEval](https://github.com/mlabonne/llm-autoeval/tree/master), a colab notebook that automates the evaluation process (benchmark: `nous`). This will upload the results to GitHub as gists. You can find the entire table with the links to the detailed results [here](https://gist.github.com/mlabonne/90294929a2dbcb8877f9696f28105fdf).
 
     ### Clone this space
 
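For context on the second hunk: `st.column_config.LinkColumn` controls the header label of a clickable-URL column inside `st.dataframe`, so this commit only shortens the visible header from "App URL" to "URL". Below is a minimal, self-contained sketch of how that column config renders; the DataFrame contents are hypothetical and not taken from the leaderboard.

```python
import pandas as pd
import streamlit as st

# Hypothetical rows for illustration only (not from the YALL leaderboard).
df = pd.DataFrame(
    {
        "Model": ["example/model"],
        "Likes": [42],
        "URL": ["https://huggingface.co/models"],
    }
)

st.dataframe(
    df,
    column_config={
        # Same NumberColumn setup as in the diff's context lines.
        "Likes": st.column_config.NumberColumn(
            "Likes",
            help="Number of likes on Hugging Face",
            format="%d ❤️",
        ),
        # After this commit, the column header reads "URL" instead of "App URL";
        # the cell values are still rendered as clickable links.
        "URL": st.column_config.LinkColumn("URL"),
    },
    hide_index=True,
)
```

Run it with `streamlit run app.py`; the `URL` column then shows each value as a hyperlink under the shortened header.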