|
--- |
|
language: |
|
- en |
|
- zh |
|
- id |
|
- th |
|
- vi |
|
- ms |
|
- lo |
|
datasets: |
|
- cerebras/SlimPajama-627B |
|
- Skywork/SkyPile-150B |
|
- allenai/MADLAD-400 |
|
- cc100 |
|
tags: |
|
- multilingual |
|
- sea |
|
- sailor |
|
license: apache-2.0 |
|
base_model: Qwen/Qwen1.5-7B |
|
model-index: |
|
- name: Sailor-7B |
|
results: |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: XQuAD-Thai |
|
type: XQuAD-Thai |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 57.88 |
|
- name: F1 (3-Shot) |
|
type: F1 (3-Shot) |
|
value: 71.06 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: TyDiQA-Indonesian |
|
type: TyDiQA-Indonesian |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 60.53 |
|
- name: F1 (3-Shot) |
|
type: F1 (3-Shot) |
|
value: 75.42 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: XQuAD-Vietnamese |
|
type: XQuAD-Vietnamese |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 53.81 |
|
- name: F1 (3-Shot) |
|
type: F1 (3-Shot) |
|
value: 74.62 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: XCOPA-Thai |
|
type: XCOPA-Thai |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 59.00 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: XCOPA-Indonesian |
|
type: XCOPA-Indonesian |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 72.20 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: XCOPA-Vietnamese |
|
type: XCOPA-Vietnamese |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 72.20 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: M3Exam-Thai |
|
type: M3Exam-Thai |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 30.00 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: M3Exam-Indonesian |
|
type: M3Exam-Indonesian |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 32.88 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: M3Exam-Vietnamese |
|
type: M3Exam-Vietnamese |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 44.10 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: BELEBELE-Thai |
|
type: BELEBELE-Thai |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 41.56 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: BELEBELE-Indonesian |
|
type: BELEBELE-Indonesian |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 44.33 |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: BELEBELE-Vietnamese |
|
type: BELEBELE-Vietnamese |
|
metrics: |
|
- name: EM (3-Shot) |
|
type: EM (3-Shot) |
|
value: 45.33 |
|
--- |
|
|
|
<div align="center"> |
|
<img src="banner_sailor.jpg" width="700"/> |
|
</div> |
|
|
|
Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.

Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscapes of the SEA region.

Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, from 0.5B to 14B, to meet different requirements.

We further fine-tune the base models on open-source datasets to obtain instruction-tuned models, namely Sailor-Chat.

Benchmarking results demonstrate Sailor's proficiency in question answering, commonsense reasoning, and other tasks in SEA languages.
|
|
|
> The logo was generated by MidJourney |
|
|
|
## Model Summary |
|
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825) |
|
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/) |
|
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm) |
|
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf) |
|
|
|
|
|
## Training Details
|
Sailor models are crafted by continual pre-training from base language models, specifically the Qwen 1.5 models, which already perform well on SEA languages.

The pre-training corpus heavily leverages publicly available corpora, including
|
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B), |
|
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B), |
|
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400). |
|
|
|
By employing aggressive data deduplication and careful data cleaning on the collected corpus, we obtained a high-quality dataset spanning various languages.
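
Purely as an illustration of the deduplication idea (the actual Sailor pipeline is described in the technical report, and the function names and normalization choices below are hypothetical examples, not Sailor's implementation), a minimal exact-duplicate filter based on content hashing might look like this:

```python
import hashlib

def normalize(text: str) -> str:
    # Illustrative normalization: lowercase and collapse whitespace.
    return " ".join(text.lower().split())

def exact_dedup(documents):
    """Drop documents whose normalized content hash has already been seen."""
    seen = set()
    unique_docs = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique_docs.append(doc)
    return unique_docs

docs = ["Halo dunia!", "halo   dunia!", "Xin chào thế giới"]
print(exact_dedup(docs))  # the near-duplicate second document is removed
```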
|
Through systematic experiments to determine the mixture weights of different languages, Sailor models are trained on 200B to 400B tokens, tailored to different model sizes.

This approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.

Specifically, we continually pre-train the Qwen1.5-0.5B model on 400 billion tokens, and the other models on 200 billion tokens, to obtain the Sailor models.
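
For intuition only, the sketch below shows how per-language sampling weights can determine which corpus the next training document is drawn from. The weights here are made-up placeholders, not the mixture the Sailor team derived (see the technical report for the actual methodology):

```python
import random

# Hypothetical per-language mixture weights (placeholders, NOT Sailor's real values).
mixture_weights = {"en": 0.2, "zh": 0.1, "id": 0.25, "th": 0.2, "vi": 0.2, "ms": 0.04, "lo": 0.01}

def sample_language(weights, rng=random):
    """Pick the language of the next training document according to the mixture."""
    languages = list(weights)
    return rng.choices(languages, weights=[weights[l] for l in languages], k=1)[0]

counts = {lang: 0 for lang in mixture_weights}
for _ in range(10_000):
    counts[sample_language(mixture_weights)] += 1
print(counts)  # empirical draws roughly follow the mixture weights
```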
|
|
|
## Requirements |
|
The code for Sailor is included in the latest Hugging Face Transformers release, and we advise you to install `transformers>=4.37.0`.
|
|
|
## Quickstart |
|
|
|
The following code snippet shows how to load the tokenizer and model, and how to generate text.
|
|
|
```python |
|
from transformers import AutoModelForCausalLM, AutoTokenizer |
|
device = "cuda" # the device to load the model |
|
|
|
model = AutoModelForCausalLM.from_pretrained("sail/Sailor-7B", device_map="auto") |
|
tokenizer = AutoTokenizer.from_pretrained("sail/Sailor-7B") |
|
|
|
input_message = "Model bahasa adalah model probabilistik" |
|
# The Indonesian input translates to: "A language model is a probabilistic model"
|
|
|
model_inputs = tokenizer([input_message], return_tensors="pt").to(device) |
|
|
|
generated_ids = model.generate( |
|
model_inputs.input_ids, |
|
max_new_tokens=64 |
|
) |
|
|
|
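# Strip the prompt tokens so that only the newly generated continuation is decoded.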
generated_ids = [ |
|
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) |
|
] |
|
|
|
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] |
|
print(response) |
|
``` |
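
By default, `generate` decodes greedily. If you prefer sampled outputs, you can pass standard sampling arguments; the particular values below are arbitrary illustrations, not official recommendations:

```python
# Illustrative sampling settings; the specific values are arbitrary choices.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=64,
    do_sample=True,   # sample from the distribution instead of greedy decoding
    temperature=0.7,  # soften the next-token distribution
    top_p=0.9,        # nucleus sampling: keep the smallest token set with 90% mass
)
```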
|
|
|
## License
|
|
|
Sailor is distributed under the terms of the Apache License 2.0. |
|
There are no restrictions on research or commercial use, but you must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-7B/blob/main/LICENSE): if your product or service has more than 100 million monthly active users in your commercial scenarios, you shall request a license from the Qwen team; otherwise, no further request is needed.
|
|
|
|
|
## Citation |
|
|
|
If you find Sailor useful, please cite our work as follows:
|
|
|
``` |
|
@article{dou2024sailor, |
|
title={Sailor: Open Language Models for South-East Asia}, |
|
author={Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Lu, Wei and Lin, Min}, |
|
journal={arXiv preprint arXiv:2404.03608}, |
|
year={2024} |
|
} |
|
``` |
|
|
|
## Contact Us
|
|
|
If you have any questions, please raise an issue or contact us at [doulx@sea.com](mailto:doulx@sea.com) or [liuqian@sea.com](mailto:liuqian@sea.com). |