---
license: mit
license_name: mit
task_categories:
- question-answering
language:
- zh
tags:
- traditional chinese
- finance
- medical
- taiwan
- benchmark
- zh-tw
- zh-hant
pretty_name: tmmlu++
size_categories:
- 100K<n<1M
---

# TMMLU+

*Banner: a close-up of a neat paper note on a white background, with "TMMLU+" written horizontally across the center in bold black.*

Join us to work on multimodal LLMs: https://ikala.ai/recruit/

We present TMMLU+, a Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering benchmark covering 66 subjects, from elementary to professional level. It is six times larger than its predecessor, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU), with a more balanced subject distribution. We include benchmark results on TMMLU+ for closed-source models and 20 open-weight Chinese large language models ranging from 1.8B to 72B parameters. The results show that models trained primarily on Traditional Chinese still lag behind their Simplified Chinese counterparts.

```python
from datasets import load_dataset

task_list = [
    'engineering_math', 'dentistry', 'traditional_chinese_medicine_clinical_medicine',
    'clinical_psychology', 'technical', 'culinary_skills', 'mechanical',
    'logic_reasoning', 'real_estate', 'general_principles_of_law',
    'finance_banking', 'anti_money_laundering', 'ttqav2', 'marketing_management',
    'business_management', 'organic_chemistry', 'advance_chemistry', 'physics',
    'secondary_physics', 'human_behavior', 'national_protection', 'jce_humanities',
    'politic_science', 'agriculture', 'official_document_management',
    'financial_analysis', 'pharmacy', 'educational_psychology',
    'statistics_and_machine_learning', 'management_accounting', 'introduction_to_law',
    'computer_science', 'veterinary_pathology', 'accounting', 'fire_science',
    'optometry', 'insurance_studies', 'pharmacology', 'taxation', 'trust_practice',
    'geography_of_taiwan', 'physical_education', 'auditing', 'administrative_law',
    'education_(profession_level)', 'economics', 'veterinary_pharmacology',
    'nautical_science', 'occupational_therapy_for_psychological_disorders',
    'basic_medical_science', 'macroeconomics', 'trade',
    'chinese_language_and_literature', 'tve_design', 'junior_science_exam',
    'junior_math_exam', 'junior_chinese_exam', 'junior_social_studies',
    'tve_mathematics', 'tve_chinese_language', 'tve_natural_sciences',
    'junior_chemistry', 'music', 'education', 'three_principles_of_people',
    'taiwanese_hokkien'
]

for task in task_list:
    # Load each subject once, then pick out its three splits.
    ds = load_dataset('ikala/tmmluplus', task)
    dev, val, test = ds['train'], ds['validation'], ds['test']
```

Every split shares the same schema: the question, four options, and the answer key.

```python
print(test)
>> Dataset({
    features: ['question', 'A', 'B', 'C', 'D', 'answer'],
    num_rows: 11
})
```

Statistics across the four categories: STEM, Social Sciences, Humanities, and Other.

| Category                        | Test  | Dev | Validation |
|---------------------------------|-------|-----|------------|
| STEM                            | 3458  | 70  | 385        |
| Social Sciences                 | 5958  | 90  | 665        |
| Humanities                      | 1763  | 35  | 197        |
| Other (Business, Health, Misc.) | 8939  | 135 | 995        |
| **Total**                       | 20118 | 330 | 2242       |
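Since every row exposes the same `question` / `A`-`D` / `answer` fields, turning a row into a zero-shot multiple-choice prompt is straightforward. Below is a minimal sketch; the `format_prompt` helper and the Chinese answer cue are illustrative assumptions, not the exact template behind the benchmark numbers (that lives in ievals).

```python
from datasets import load_dataset

CHOICES = ["A", "B", "C", "D"]

def format_prompt(row: dict) -> str:
    # Illustrative zero-shot template; the official ievals harness defines
    # the exact prompt used for the benchmark results below.
    options = "\n".join(f"{c}. {row[c]}" for c in CHOICES)
    return f"{row['question']}\n{options}\n答案："  # "答案" = "answer"

test = load_dataset("ikala/tmmluplus", "geography_of_taiwan")["test"]
row = test[0]
print(format_prompt(row))
print("gold:", row["answer"])  # one of 'A'-'D'
```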
## Benchmark on direct prompting

| model | STEM | Social Science | Humanities | Other | Average |
|------------|------------|------------|------------|------------|------------|
| [Qwen/Qwen-72B](https://huggingface.co/Qwen/Qwen-72B) | 61.12 | 71.65 | 63.00 | 61.31 | 64.27 |
| gpt-4-0613 | 60.36 | 67.36 | 56.03 | 57.62 | 60.34 |
| Qwen-max | 59.92 | 66.95 | 57.43 | 56.48 | 60.20 |
| [Qwen/Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat) | 55.15 | 66.20 | 55.65 | 57.19 | 58.55 |
| [Qwen/Qwen-14B](https://huggingface.co/Qwen/Qwen-14B) | 46.94 | 56.69 | 49.43 | 48.81 | 50.47 |
| Gemini-pro | 45.38 | 57.29 | 48.80 | 48.21 | 49.92 |
| [01-ai/Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 40.24 | 56.77 | 53.99 | 47.58 | 49.64 |
| [Reka Flash](https://www.reka.ai/) | 45.26 | 52.91 | 46.31 | 43.76 | 47.06 |
| [Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 43.86 | 53.29 | 44.78 | 45.13 | 46.77 |
| [01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 39.62 | 50.24 | 44.44 | 44.26 | 44.64 |
| Claude-1.3 | 42.65 | 49.33 | 42.16 | 44.14 | 44.57 |
| [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) | 36.46 | 48.38 | 45.11 | 40.75 | 42.67 |
| gpt-3.5-turbo-0613 | 41.56 | 46.72 | 36.73 | 42.03 | 41.76 |
| [CausalLM/14B](https://huggingface.co/CausalLM/14B) | 39.83 | 44.50 | 39.61 | 41.97 | 41.48 |
| [Skywork/Skywork-13B-base](https://huggingface.co/Skywork/Skywork-13B-base) | 36.93 | 47.27 | 41.04 | 40.10 | 41.33 |
| [Qwen/Qwen-7B](https://huggingface.co/Qwen/Qwen-7B) | 37.53 | 45.48 | 38.09 | 38.96 | 40.01 |
| [meta-llama/Llama-3-70b-chat-hf](https://docs.together.ai/docs/inference-models) | 34.44 | 47.02 | 37.50 | 39.51 | 39.62 |
| [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 33.32 | 44.64 | 40.27 | 39.89 | 39.53 |
| [vivo-ai/BlueLM-7B-Base](https://huggingface.co/vivo-ai/BlueLM-7B-Base) | 33.94 | 41.52 | 37.38 | 38.74 | 37.90 |
| [baichuan-inc/Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | 29.64 | 43.73 | 37.36 | 39.88 | 37.65 |
| [Qwen/Qwen-1_8B](https://huggingface.co/Qwen/Qwen-1_8B) | 32.65 | 38.95 | 38.34 | 35.27 | 36.30 |
| Claude-2 | 39.65 | 39.09 | 28.59 | 37.47 | 36.20 |
| [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) | 31.05 | 39.31 | 35.64 | 35.60 | 35.40 |
| [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat) | 29.82 | 42.29 | 34.24 | 34.31 | 35.17 |
| [CausalLM/7B](https://huggingface.co/CausalLM/7B) | 31.03 | 38.17 | 35.87 | 35.39 | 35.11 |
| [Azure99/blossom-v3_1-mistral-7b](https://huggingface.co/Azure99/blossom-v3_1-mistral-7b) | 32.80 | 36.91 | 32.36 | 34.53 | 34.15 |
| [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 31.89 | 35.70 | 34.00 | 33.79 | 33.84 |
| [Reka Edge](https://www.reka.ai/) | 30.02 | 39.40 | 31.84 | 32.36 | 33.41 |
| [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) | 24.69 | 39.18 | 33.60 | 31.99 | 32.37 |
| [Qwen/Qwen-1_8B-Chat](https://huggingface.co/Qwen/Qwen-1_8B-Chat) | 26.60 | 36.36 | 31.81 | 31.96 | 31.68 |
| [meta-llama/Llama-3-8b-chat-hf](https://docs.together.ai/docs/inference-models) | 31.52 | 34.19 | 28.91 | 31.79 | 31.60 |
| [TigerResearch/tigerbot-13b-chat-v3](https://huggingface.co/TigerResearch/tigerbot-13b-chat-v3) | 24.73 | 29.63 | 25.72 | 27.22 | 26.82 |
| [hongyin/mistral-7b-80k](https://huggingface.co/hongyin/mistral-7b-80k) | 24.26 | 23.76 | 22.56 | 24.57 | 23.79 |
| [deepseek-ai/deepseek-llm-67b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat) | 19.10 | 26.06 | 21.51 | 21.77 | 22.11 |
| [yentinglin/Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 18.53 | 27.65 | 17.77 | 21.49 | 21.36 |
| [GeneZC/MiniChat-3B](https://huggingface.co/GeneZC/MiniChat-3B) | 17.66 | 23.35 | 22.71 | 20.34 | 21.02 |
| [LinkSoul/Chinese-Llama-2-7b](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b) | 16.55 | 18.39 | 12.97 | 16.13 | 16.01 |
| [yentinglin/Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 14.99 | 16.23 | 15.00 | 16.22 | 15.61 |
| Claude-instant-1 | 12.52 | 17.13 | 15.10 | 13.57 | 14.58 |
| [FlagAlpha/Atom-7B](https://huggingface.co/FlagAlpha/Atom-7B) | 5.60 | 13.57 | 7.71 | 11.84 | 9.68 |

Results via [ievals](https://github.com/iKala/ievals) (settings: 0-shot direct answering).
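The "Average" column matches the unweighted mean of the four category scores rather than a per-question average (which would weight "Other", the largest category, more heavily). For example, the Qwen-72B row:

```python
# Unweighted mean of the four category scores for Qwen/Qwen-72B.
scores = [61.12, 71.65, 63.00, 61.31]  # STEM, Social Science, Humanities, Other
print(round(sum(scores) / len(scores), 2))  # 64.27
```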
# Citation

```bibtex
@article{ikala2024improved,
  title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},
  author={Tam, Zhi-Rui and Pai, Ya-Ting and Lee, Yen-Wei and Cheng, Sega and Shuai, Hong-Han},
  journal={arXiv preprint arXiv:2403.01858},
  year={2024}
}
```