Clémentine committed
Commit b38f281
1 Parent(s): 87fcfe5

added blog list

Files changed (1)
contents.py +25 -2
contents.py CHANGED
@@ -1,3 +1,6 @@
+import urllib.request
+import yaml
+
 TITLE = """<h1 style="text-align:center;float:center; id="space-title">Leaderboards on the hub - Documentation</h1>"""
 
 IMAGE = "![Leaderboards on the Hub](https://raw.githubusercontent.com/huggingface/blog/main/assets/leaderboards-on-the-hub/thumbnail.png)"
@@ -54,13 +57,33 @@ A number of evaluations are very easy to cheat, accidentally or not: if a model
 Evaluations of closed source models are not always still accurate some time later: as closed source models are behind APIs, it is not possible to know how the model changes and what is added or removed through time (contrary to open source models, where relevant information is available). As such, you should not assume that a static evaluation of a closed source model at time t will still be valid some time later.
 """
 
+# We extract the most recent blogs to display and embed them
+blog_info = "https://raw.githubusercontent.com/huggingface/blog/main/_blog.yml"
+with urllib.request.urlopen(blog_info) as f:
+    file = yaml.safe_load(f.read())
+recent_blogs = [entry for entry in file[::-1] if "leaderboard" in entry["tags"]][:5]
+
+def return_blog_code(blogs_yaml):
+    # Not used at the moment, but could be improved if we wanted the images too
+    first_row = "|".join([f"[{blog['title']}](https://huggingface.co/blog/{blog['local']})" for blog in blogs_yaml])
+    second_row = "|".join([":---:" for _ in blogs_yaml])
+    third_row = "|".join([f" ![](https://huggingface.co{blog['thumbnail']}) " for blog in blogs_yaml])
+
+    return "\n\n|" + first_row + "|\n|" + second_row + "|\n|" + third_row + "|\n\n"
+
+def return_blog_list(blogs_yaml):
+    return "\n- ".join([" "] + [f"[{blog['title']}](https://huggingface.co/blog/{blog['local']})" for blog in blogs_yaml])
+
 FINDING_PAGE = """
 # Finding the best leaderboard for your use case
 
 ## ✨ Featured leaderboards
 
-Since the end of 2023, we have worked with partners with strong evaluation knowledge, to highlight their work as a blog series, called `Leaderboards on the Hub`.
-You'll find it [here](https://huggingface.co/blog?tag=leaderboard).
+Since the end of 2023, we have worked with partners with strong evaluation knowledge to highlight their work as a blog series called [`Leaderboards on the Hub`](https://huggingface.co/blog?tag=leaderboard).
+
+Here are the most recent blogs we wrote together:
+""" + return_blog_list(recent_blogs) + """
+
 
 This series is particularly interesting to understand the subtleties of evaluation across different modalities and topics, and we hope it will act as a knowledge base in the future.
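
For context on what the new top-level code does: it downloads the blog index once at import time and keeps the five most recent posts tagged `leaderboard`. Below is a minimal offline sketch of that filtering step; the records are invented but mirror the fields the code relies on in `_blog.yml` entries (`title`, `local`, `tags`, `thumbnail`).

```python
# Offline sketch of the filtering step from the diff above.
# These records are invented; they only mirror the fields the code uses.
file = [
    {"title": "An older post", "local": "older-post",
     "tags": ["nlp"], "thumbnail": "/blog/assets/older-post/thumbnail.png"},
    {"title": "A leaderboard post", "local": "leaderboard-post",
     "tags": ["leaderboard"], "thumbnail": "/blog/assets/leaderboard-post/thumbnail.png"},
]

# Reversing before slicing assumes _blog.yml lists entries oldest-first,
# so the newest leaderboard posts end up at the front of the result.
recent_blogs = [entry for entry in file[::-1] if "leaderboard" in entry["tags"]][:5]
print([entry["local"] for entry in recent_blogs])  # ['leaderboard-post']
```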
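A similar sketch of what the rendering helpers emit for such a record: `return_blog_list` (the one the page embeds) produces a markdown bullet list, while the currently unused `return_blog_code` would produce a one-row table of linked titles with thumbnails underneath.

```python
# Same helper as in the diff, shown on one invented record.
def return_blog_list(blogs_yaml):
    return "\n- ".join([" "] + [f"[{blog['title']}](https://huggingface.co/blog/{blog['local']})" for blog in blogs_yaml])

blogs = [{"title": "A leaderboard post", "local": "leaderboard-post"}]
print(return_blog_list(blogs))
# Prints a leading space, then one markdown bullet per post:
# - [A leaderboard post](https://huggingface.co/blog/leaderboard-post)
```

Two design notes: `yaml.safe_load` is the safer parser for remotely fetched YAML, and because the fetch runs at module import, a network failure or a change to the `_blog.yml` schema would surface as an error when the Space starts rather than as a broken section at render time.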