from src.display_models.model_metadata_type import ModelType

TITLE = """"""

BOTTOM_LOGO = """"""

INTRODUCTION_TEXT = f"""
🚀 The Open Ko-LLM Leaderboard 🇰🇷 objectively evaluates the performance of Korean large language models.

When you submit a model on the "Submit here!" page, it is automatically evaluated. The GPUs used for evaluation are operated with the support of KT.
The evaluation data consists of datasets that assess expertise, reasoning, hallucination, and common sense. The evaluation datasets are kept strictly private and are used only during the evaluation process. More detailed information about the benchmark datasets is provided on the "About" page.

This leaderboard is co-hosted by __Upstage__ and __NIA__, and operated by __Upstage__.
"""

LLM_BENCHMARKS_TEXT = f"""
# Context
While outstanding LLMs are being released competitively, most of them are centered on English and the English-speaking cultural sphere. We operate the Korean leaderboard, 🚀 Open Ko-LLM, to evaluate models that reflect the characteristics of the Korean language and Korean culture. Through this, we hope that users can conveniently use the leaderboard, participate in it, and contribute to the advancement of research on Korean LLMs.

## Icons
{ModelType.PT.to_str(" : ")} model
{ModelType.FT.to_str(" : ")} model
{ModelType.IFT.to_str(" : ")} model
{ModelType.RL.to_str(" : ")} model
If there is no icon, it indicates that there is insufficient information about the model. Please provide information about the model through an issue! 🤩

🏴‍☠️ : This icon indicates that the model has been flagged by the community as requiring caution, and users should exercise restraint when using it. Clicking on the icon will take you to a discussion about that model. (Models that have used the evaluation set for training in order to achieve a high leaderboard ranking, among others, are flagged in this way.)

## How it works 📈
We have set up a benchmark using datasets translated into Korean from the four tasks (HellaSwag, MMLU, ARC, TruthfulQA) used by the HuggingFace Open LLM Leaderboard.
- Ko-HellaSwag (provided by __Upstage__)
- Ko-MMLU (provided by __Upstage__)
- Ko-ARC (provided by __Upstage__)
- Ko-TruthfulQA (provided by __Upstage__)
To provide an evaluation befitting the LLM era, we selected benchmark datasets suitable for assessing four elements: expertise, reasoning, hallucination, and common sense. The final score is the average of the scores on the four evaluation datasets.

GPUs for the evaluations are provided by __KT__.

## Details and Logs
- Detailed numerical results in the `results` Upstage dataset: https://huggingface.co/datasets/open-ko-llm-leaderboard/results
- Community queries and running status in the `requests` Upstage dataset: https://huggingface.co/datasets/open-ko-llm-leaderboard/requests

## Reproducibility
To reproduce our results, use [this version](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463) of the EleutherAI lm-evaluation-harness.

## More resources
If you still have questions, you can check our FAQ [here](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard/discussions/1)!
We also gather cool resources from the community, other teams, and other labs [here](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard/discussions/2)!
"""

EVALUATION_QUEUE_TEXT = f"""
# Evaluation Queue for the 🚀 Open Ko-LLM Leaderboard
Models added here will be automatically evaluated on the KT GPU cluster.
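
Once submitted, each request is tracked in the public `requests` dataset (https://huggingface.co/datasets/open-ko-llm-leaderboard/requests). As a rough illustration, assuming that dataset loads directly with `datasets.load_dataset` and exposes `model` and `status` fields (the actual schema may differ), you can peek at the queue like this:
```
from datasets import load_dataset

# Illustrative sketch: assumes a single "train" split with "model" and
# "status" columns; adjust to the dataset's actual layout.
queue = load_dataset("open-ko-llm-leaderboard/requests", split="train")
for entry in queue:
    print(entry["model"], entry["status"])
```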
## First steps before submitting a model

### 1️⃣ Make sure you can load your model and tokenizer using AutoClasses
```
from transformers import AutoConfig, AutoModel, AutoTokenizer
config = AutoConfig.from_pretrained("your model name", revision=revision)
model = AutoModel.from_pretrained("your model name", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
```
If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.

⚠️ Make sure your model is public!

⚠️ If your model needs `trust_remote_code=True`, we do not support this option yet, but we are working on adding it. Stay posted!

### 2️⃣ Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`!

### 3️⃣ Make sure your model has an open license!
This is a leaderboard for Open Ko-LLMs, and we'd love for as many people as possible to know they can use your model.

### 4️⃣ Fill up your model card
When we add extra information about models to the leaderboard, it will be automatically taken from the model card.

## In case of model failure
If your model is displayed in the `FAILED` category, its execution stopped.
Make sure you have followed the above steps first.
If everything is done, check that you can launch the EleutherAI LM Evaluation Harness on your model locally (you can add `--limit` to limit the number of examples per task).
"""

CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""
@misc{open-ko-llm-leaderboard,
  author = {Chanjun Park and Hwalsuk Lee and Hyunbyung Park and Sanghoon Kim and Seonghwan Cho and Sunghun Kim and Sukyung Lee},
  title = {Open Ko-LLM Leaderboard},
  year = {2023},
  publisher = {Upstage, National Information Society Agency},
  howpublished = "\url{https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard/discussions/1}"
}
"""
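
# Optional, illustrative sanity check mirroring step 1 of EVALUATION_QUEUE_TEXT
# (loading a model with AutoClasses before submitting). It is not part of the
# leaderboard app and only runs when this module is executed directly, e.g.
# `python text_content.py org/model-name [revision]` (file name assumed here).
if __name__ == "__main__":
    import sys

    from transformers import AutoConfig, AutoModel, AutoTokenizer

    model_name = sys.argv[1]  # e.g. "org/model-name"
    revision = sys.argv[2] if len(sys.argv) > 2 else "main"

    # If any of these calls fail, fix the model repository before submitting it.
    config = AutoConfig.from_pretrained(model_name, revision=revision)
    tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision)
    model = AutoModel.from_pretrained(model_name, revision=revision)
    print("Successfully loaded", model_name, "at revision", revision)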