41ow1ives committed
Commit 1220a69
1 Parent(s): 8a7d698

Update src/assets/text_content.py

Files changed (1)
  1. src/assets/text_content.py +3 -3
src/assets/text_content.py CHANGED
@@ -6,7 +6,7 @@ BOTTOM_LOGO = """<img src="https://upstage-open-ko-llm-leaderboard-logos.s3.ap-n
  INTRODUCTION_TEXT = f"""
  🚀 The Open Ko-LLM Leaderboard 🇰🇷 objectively evaluates the performance of Korean Large Language Models (LLMs).

- When you submit a model on the "Submit here!" page, it is automatically evaluated. The GPU used for evaluation is operated with the support of KT.
+ When you submit a model on the "Submit here!" page, it is automatically evaluated. The GPU used for evaluation is operated with the support of __[KT](https://cloud.kt.com/)__.
  The data used for evaluation consists of datasets to assess reasoning, language understanding, hallucination, and commonsense.
  The evaluation dataset is exclusively private and only available for the evaluation process.
  More detailed information about the benchmark dataset is provided on the “About” page.
@@ -58,7 +58,7 @@ Models added here will be automatically evaluated on the KT GPU cluster.
  ## <Some good practices before submitting a model>

  ### 1️⃣ Make sure you can load your model and tokenizer using AutoClasses
- ```
+ ```python
  from transformers import AutoConfig, AutoModel, AutoTokenizer
  config = AutoConfig.from_pretrained("your model name", revision=revision)
  model = AutoModel.from_pretrained("your model name", revision=revision)
@@ -71,7 +71,7 @@ If this step fails, follow the error messages to debug your model before submitt
 
  ⚠️ Make sure your model runs with [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness)
 
- ⚠️ If your model needs use_remote_code=True, we do not support this option yet but we are working on adding it, stay posted!
+ ⚠️ If your model needs `use_remote_code=True`, we do not support this option yet but we are working on adding it, stay posted!
 
  ### 2️⃣ Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
  It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`!
 
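The AutoClasses snippet in the second hunk is cut off at the hunk boundary. For reference, a complete minimal sketch of that check could look like the following; the repository name and revision are placeholders, not values taken from this commit:

```python
# Minimal sketch of the AutoClasses sanity check; replace the placeholders
# with your own Hub repository and the revision (branch or commit) you submit.
from transformers import AutoConfig, AutoModel, AutoTokenizer

model_name = "your-org/your-model"   # placeholder
revision = "main"                    # placeholder

config = AutoConfig.from_pretrained(model_name, revision=revision)
model = AutoModel.from_pretrained(model_name, revision=revision)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision)
```

If any of these three calls fails, the leaderboard backend is unlikely to be able to load the model either, so it is worth debugging locally before submitting.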
 
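To smoke-test the harness requirement mentioned in the third hunk, the snippet below is one possible local check. It is only a sketch: the leaderboard's own evaluation sets are private, so a public task such as `hellaswag` stands in here, and the model-type string and argument names vary somewhat between harness versions ("hf" in recent releases, "hf-causal" in older ones).

```python
# Hypothetical local check that a model runs under the Eleuther AI
# lm-evaluation-harness (pip install lm-eval). Task choice and model name
# are placeholders; the leaderboard's private datasets are not used here.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",  # "hf-causal" on older harness versions
    model_args="pretrained=your-org/your-model,revision=main",
    tasks=["hellaswag"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```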
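For the safetensors conversion step, a recent `transformers` release can re-serialize an existing checkpoint directly. This is a sketch with placeholder model name and output directory, not part of the commit itself:

```python
# Re-save a checkpoint in safetensors format. safe_serialization=True writes
# model.safetensors instead of pytorch_model.bin (supported in recent
# transformers releases). Model name and output directory are placeholders.
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("your-org/your-model")
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

model.save_pretrained("your-model-safetensors", safe_serialization=True)
tokenizer.save_pretrained("your-model-safetensors")
```

The converted folder can then be uploaded back to the Hub before submitting.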