VLSP 2023 - VLLM
👉 Benchmark datasets here
VLSP 2023 VLLMs: Vietnamese Large Language Models
VLSP 2023 challenge on Vietnamese Large Language Models
To contact us, mail to: leanhcuong@gmail.com (Lê Anh Cường)
1. Important dates
- Aug 14, 2023: Registration open
- Sep 14, 2023: Registration close
- Sep 30, 2023: Release of test samples and evaluation instructions
- Nov 10, 2023: System submission deadline (API only)
- Nov 26, 2023: Technical report submission
- Dec 15-16, 2023: Result announcement - Workshop day
2. Task Description
In recent years, Large Language Models (LLMs) have gained widespread recognition and popularity worldwide, with models such as GPT-X, Bard, and LLaMA making significant strides in natural language processing tasks. In Vietnam, there is also growing interest in developing LLMs specifically tailored to the Vietnamese language. However, unlike LLMs developed for other languages, publicly accessible evaluation data for Vietnamese LLMs is scarce, which presents a substantial obstacle for organizations seeking to establish uniform evaluation standards. The goal of VLSP2023-VLLMs is to promote the development of large language models for Vietnamese by constructing an evaluation dataset for VLLMs. Unlike conventional datasets for downstream NLP tasks, this dataset will focus on 4 primary abilities, broken down into 8 skills, spanning 9 domains, following the fine-grained skill-set evaluation approach of FLASK [2].
Abilities and skills
- Logical thinking: Logical Correctness
- Background knowledge: Factuality, Commonsense Understanding
- Problem handling: Comprehension, Insightfulness, Metacognition
- User alignment: Harmlessness, Follow the Correct Instruction
Domains
- Humanities: Communication, Education
- Language: Poetry, Literature
- Social Science: Business, Finance, Law
- History: History
- Culture: Food, Sports, Art, Music
- Technology: Marketing, Electronics, Engineering
- Math: Mathematics, Logic
- Natural Science: Biology, Chemistry, Physics
- Health: Healthcare, Exercise, Nutrition
The teams participating in this challenge will build their own LLMs for Vietnamese; these models will be evaluated against a public test dataset released together with evaluation instructions. Models remain the intellectual property of their respective development teams and are not required to be open-sourced. The resources provided to participating teams are listed in Section 5 below.
3. Evaluation
Submissions will be evaluated by both model-based evaluation and human-based evaluation.
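Model-based evaluation along a skill taxonomy like the one above is typically done by having a judge model assign each response a rubric score per skill, then aggregating. The sketch below is an illustration only: the 1-5 scale, the averaging scheme, and the function names are assumptions, not the official VLSP2023-VLLMs protocol; only the skill names come from the task description.

```python
from statistics import mean

# Skill names from the challenge taxonomy. The 1-5 rubric scale and
# the plain averaging below are hypothetical choices for illustration.
SKILLS = [
    "Logical Correctness", "Factuality", "Commonsense Understanding",
    "Comprehension", "Insightfulness", "Metacognition",
    "Harmlessness", "Follow the Correct Instruction",
]

def response_score(skill_scores: dict[str, int]) -> float:
    """Average the per-skill rubric scores (1-5) for one response."""
    missing = [s for s in SKILLS if s not in skill_scores]
    if missing:
        raise ValueError(f"missing scores for skills: {missing}")
    return mean(skill_scores[s] for s in SKILLS)

def system_score(per_response: list[dict[str, int]]) -> float:
    """Average response-level scores over the whole test set."""
    return mean(response_score(r) for r in per_response)
```

In practice the per-skill scores would come from a judge model prompted with the question, the response, and a skill-specific rubric; human evaluation can reuse the same aggregation over annotator scores.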
4. Registration
👉 Shared Task Registration Form
5. Resources
We will provide the following resources to the participating teams:
- Publicly shared pre-trained LLMs.
- Plain-text datasets for Vietnamese.
- Instruction datasets.
- Sample items from the evaluation dataset.
Note that the participating teams can use any resources to train their models.
Organizers
- Lê Anh Cường - Ton Duc Thang University
- Nguyễn Trọng Hiếu - Ton Duc Thang University
- Nguyễn Việt Cường - Intelligent Integration Co., Ltd. (INT2)
- Nguyễn Ngọc Quế - EDMICRO EDUCATION Co., Ltd.
- Le-Minh Nguyen - Japan Advanced Institute of Science and Technology (JAIST)
- Cam-Tu Nguyen - Artificial Intelligence School, Nanjing University, China
Sponsors
- Intelligent Integration Co., Ltd. (INT2), Vietnam - www.int2.vn
- HPC SYSTEMS Inc., Japan - www.hpc.co.jp
References
[1] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.
[2] Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo. FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets. arXiv preprint arXiv:2307.10928, 2023.