Model,Accuracy
Qwen2-7B-Instruct,0.7615193026151931
Meta-Llama-3.1-8B-Instruct,0.5149439601494396
llama3-8b-cpt-sea-lionv2.1-instruct,0.50186799501868
Gemma-2-9b-it-sg-ultrachat-sft,0.5560398505603985
Qwen2_5_32B_Instruct,0.8262764632627646
Qwen2_5_7B_Instruct,0.7459526774595268
Qwen2_5_1_5B_Instruct,0.5971357409713575
Qwen2-72B-Instruct,0.8312577833125778
MERALiON-LLaMA-3-8B-Chat,0.4863013698630137
Meta-Llama-3-8B-Instruct,0.4775840597758406
Meta-Llama-3.1-70B-Instruct,0.6612702366127023
Qwen2_5_3B_Instruct,0.6537982565379825
SeaLLMs-v3-7B-Chat,0.7658779576587795
Qwen2_5_72B_Instruct,0.8325031133250311
gemma-2-9b-it,0.5523038605230386
Meta-Llama-3-70B-Instruct,0.6220423412204235
Qwen2_5_14B_Instruct,0.7839352428393525
gemma2-9b-cpt-sea-lionv3-instruct,0.5722291407222914
gemma-2-2b-it,0.4352428393524284
llama3-8b-cpt-sea-lionv2-instruct,0.49813200498132004
Qwen2_5_0_5B_Instruct,0.41718555417185554
GPT4o_0513,0.7073474470734745