SeaLLM-7B-v2 - Large Language Models for Southeast Asia

🤗 Tech Memo    🤗 DEMO    Github    Technical Report

We introduce SeaLLM-7B-v2, the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since SeaLLM-13B: at half the size, it delivers superior performance across diverse multilingual tasks, from world knowledge and math reasoning to instruction following.

Highlights

  • SeaLLM-7B-v2 achieves the 7B SOTA on the GSM8K task with a score of 78.2 and outperforms GPT-3.5 on many GSM8K tasks translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭) as well as on MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 on MATH for Thai 🇹🇭.
  • It scores competitively against GPT-3.5 on many zero-shot commonsense benchmarks, with scores of 82.5, 68.3, and 80.9 on Arc-Challenge, Winogrande, and Hellaswag.
  • It achieves a score of 7.54 on the 🇬🇧 MT-bench, ranking 3rd on the leaderboard in the 7B category and standing as the best-performing multilingual model there.
  • It scores 45.46 on the VMLU benchmark for Vietnamese 🇻🇳 and is the only open-source multilingual model competitive with monolingual models of similar size, such as Vistral-7B.

Release and DEMO

Terms of Use and License: By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our SeaLLMs Terms Of Use.

Disclaimer: We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation. Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.

The logo was generated by DALL-E 3.

What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1?

  • SeaLLM-7B-v2 is continue-pretrained from Mistral-7B and underwent carefully designed tuning with a focus on reasoning.

Evaluation

Zero-shot Multilingual Math Reasoning

SeaLLM-7B-v2 achieves a score of 78.2 on GSM8K, making it the state of the art among 7B models. It also outperforms GPT-3.5 on the same GSM8K benchmark translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). SeaLLM-7B-v2 also surpasses GPT-3.5 on the Thai-translated MATH benchmark, scoring 22.4 vs 18.1.

[Figure: fig_sea_math_side_by_side.png — GSM8K and MATH results across languages]

See details on English and translated GSM8K and MATH

| Model | GSM8K en | MATH en | GSM8K zh | MATH zh | GSM8K vi | MATH vi | GSM8K id | MATH id | GSM8K th | MATH th |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1 |
| Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6 |
| Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | | |
| SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4 |

Zero-shot MGSM

SeaLLM-7B-v2 also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM benchmark for Zh and Th.

| Model | MGSM-Zh | MGSM-Th |
|---|---|---|
| ChatGPT (reported) | 61.2* | 47.2* |
| Qwen-14B-chat | 59.6 | 28 |
| SeaLLM-7B-v2 | 64.8 | 62.4 |

Zero-shot Commonsense Reasoning

We compare SeaLLM-7B-v2 with ChatGPT and Mistral-7B-instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande, and Hellaswag). We use the 2-stage technique of (Kojima et al., 2023) to extract the answer (a minimal sketch follows the table below). Note that we DID NOT use "Let's think step-by-step" to invoke explicit CoT.

| Model | Arc-Challenge | Winogrande | Hellaswag |
|---|---|---|---|
| ChatGPT (reported) | 84.6* | 66.8* | 72.0* |
| ChatGPT (reproduced) | 84.1 | 63.1 | 79.5 |
| Mistral-7B-Instruct | 68.1 | 56.4 | 45.6 |
| SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9 |
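
For reference, the two-stage extraction mentioned above can be sketched as follows. This is a minimal illustration, not the exact evaluation harness: the prompt wording, the `generate` callable, and the option formatting are all assumptions.

def two_stage_answer(generate, question, choices):
    """Two-stage zero-shot answer extraction (after Kojima et al.).
    `generate` is any text-completion callable (hypothetical LLM wrapper)."""
    letters = [chr(ord("A") + i) for i in range(len(choices))]
    options = "\n".join(f"({l}) {c}" for l, c in zip(letters, choices))
    stage1 = f"Question: {question}\nAnswer choices:\n{options}\nAnswer:"
    free_form = generate(stage1)  # stage 1: free-form answer, no explicit CoT trigger
    # stage 2: append an extraction suffix so a single option letter can be parsed
    stage2 = (f"{stage1} {free_form}\nTherefore, among ({letters[0]}) "
              f"through ({letters[-1]}), the answer is (")
    return generate(stage2).strip()[:1]  # e.g. "A"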

Multilingual World Knowledge

We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot M3Exam (M3e) for En, Zh, Vi, Id, Th, and zero-shot VMLU for Vi.
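
As an illustration of the few-shot setups, a 5-shot prompt is assembled by prepending solved exemplars to the test question. This is a generic sketch with an assumed question/answer format, not the exact harness used here.

def few_shot_prompt(exemplars, test_question, k=5):
    """Assemble a k-shot prompt from (question, answer) exemplar pairs."""
    shots = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in exemplars[:k])
    return f"{shots}\n\nQuestion: {test_question}\nAnswer:"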

| Model | Langs | En MMLU | En M3e | Zh M3e | Vi M3e | Vi VMLU | Id M3e | Th M3e |
|---|---|---|---|---|---|---|---|---|
| ChatGPT | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41 |
| SeaLLM-13B | Multi | 52.78 | 62.69 | 44.50 | 46.45 | | 39.28 | 36.39 |
| Vistral-7B | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27 |
| SeaLLM-7B-v2 | Multi | 60.72 | 70.91 | 55.43 | 51.15 | 45.46 | 42.25 | 35.52 |

MT-Bench

On the English MT-bench, SeaLLM-7B-v2 achieves a score of 7.54 (3rd place on the leaderboard in the 7B category), outperforming many 70B models, and is arguably the only model at this level that handles 10 SEA languages.

Refer to mt_bench/seallm_7b_v2.jsonl for the MT-bench predictions of SeaLLM-7B-v2.

| Model | Access | Langs | MT-Bench |
|---|---|---|---|
| GPT-4-turbo | closed | multi | 9.32 |
| GPT-4-0613 | closed | multi | 9.18 |
| Mixtral-8x7b (46B) | open | multi | 8.3 |
| Starling-LM-7B-alpha | open | mono (en) | 8.0 |
| OpenChat-3.5-7B | open | mono (en) | 7.81 |
| SeaLLM-7B-v2 | open | multi (10+) | 7.54 |
| Qwen-14B | open | multi | 6.96 |
| Llama-2-70B | open | mono (en) | 6.86 |
| Mistral-7B-instruct | open | mono (en) | 6.84 |

Sea-Bench

Similar to MT-Bench, Sea-bench is a set of categorized instruction test sets that measures a model's ability as an assistant, specifically focused on 9 SEA languages, including non-Latin low-resource languages.

As shown, the largest improvements come from math reasoning, where the model reaches GPT-3.5-level performance.

[Figure: fig_sea_bench_side_by_side.png — Sea-bench results by category and language]

Refer to sea_bench/seallm_7b_v2.jsonl for the Sea-bench predictions of SeaLLM-7B-v2.
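
To inspect these predictions locally, the jsonl file can be read line by line. A small sketch; the field names inside each record are not assumed here.

import json

# Each line of the predictions file is one JSON record
with open("sea_bench/seallm_7b_v2.jsonl") as f:
    preds = [json.loads(line) for line in f]
print(f"{len(preds)} predictions loaded")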

Usage

Instruction format

prompt = """<|im_start|>system
You are a helpful assistant.</s>
<|im_start|>user
Hello world</s>
<|im_start|>assistant
Hi there, how can I help?</s>

# ! ENSURE 1 and only 1 bos `<s>` at the beginning of sequence
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))

['<s>', 'โ–<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', 'โ–are', 'โ–a', 'โ–helpful', 'โ–assistant', '.', '</s>', 'โ–', '<0x0A>', '<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', 'โ–world', '</s>', 'โ–', '<0x0A>', '<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', 'โ–there', ',', 'โ–how', 'โ–can', 'โ–I', 'โ–help', '?', '</s>', 'โ–', '<0x0A>']
"""

Using transformers's chat_template


import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2")

messages = [
    {"role": "user", "content": "Hello world"},
    {"role": "assistant", "content": "Hi there, how can I help you today?"},
    {"role": "user", "content": "Explain general relativity in details."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))
# ['<s>', 'โ–<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', 'โ–world', '</s>', 'โ–', '<0x0A>', '<', '|', 'im ....

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
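
To print only the model's reply rather than the full transcript, you can slice off the prompt tokens first. A small convenience, not part of the original snippet:

# Decode only the newly generated tokens, dropping the echoed prompt
new_tokens = generated_ids[0, model_inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))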

Using vLLM

from vllm import LLM, SamplingParams
TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
TURN_PREFIX = "<|im_start|>{role}\n"

def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
    # conversations: list of dict with key `role` and `content` (openai format)
    if conversations[0]['role'] != 'system' and system_prompt is not None:
        conversations = [{"role": "system", "content": system_prompt}] + conversations
    text = ''
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        text += prompt
    if add_assistant_prefix:
        prompt = TURN_PREFIX.format(role='assistant')
        text += prompt    
    return text

sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16")

message = "Explain general relativity in details."
prompt = seallm_chat_convo_format(message, True)
gen = llm.generate(prompt, sampling_params)

print(gen[0].outputs[0].text)
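
The same helper also works for batched generation, since vLLM's llm.generate accepts a list of prompts. A brief usage sketch; the example conversations are made up:

# Batched generation over several independent conversations
convos = [
    [{"role": "user", "content": "Xin chào! Bạn có khỏe không?"}],
    [{"role": "user", "content": "What is the capital of Indonesia?"}],
]
prompts = [seallm_chat_convo_format(c, add_assistant_prefix=True) for c in convos]
for out in llm.generate(prompts, sparams):
    print(out.outputs[0].text)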

Acknowledgement to Our Linguists

We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT datasets, as well as evaluate our models across different aspects, especially safety.

Citation

If you find our project useful, we hope you would kindly star our repo and cite our work as follows.

Corresponding Author: l.bing@alibaba-inc.com

Author list and order will change!

  • * and ^ are equal contributions.
@article{damonlpsg2023seallm,
  author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*,
            Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
            Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
            Chaoqun Liu, Hang Zhang, Lidong Bing},
  title = {SeaLLMs - Large Language Models for Southeast Asia},
  year = 2023,
  Eprint = {arXiv:2312.00738},
}