---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- cerebras/SlimPajama-627B
- Skywork/SkyPile-150B
- allenai/MADLAD-400
- cc100
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
- gguf
license: apache-2.0
base_model: sail/Sailor-4B
---

<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>

Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscape of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, from 0.5B to 7B, to suit different requirements.
We further fine-tune the base models on open-source datasets to obtain instruction-tuned models, named Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in question answering, commonsense reasoning, and other tasks in SEA languages.

> The logo was generated by MidJourney

## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)

## Training details
Sailor is crafted by continual pre-training from language models such as the remarkable Qwen 1.5 models, which already perform well on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction-tuning corpora are likewise all publicly available, including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset), and
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).

By employing aggressive data deduplication and careful data cleaning on the collected corpus, we obtained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, the Sailor models are trained on 200B to 400B tokens, tailored to different model sizes.
This approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model on 400 billion tokens, and the other models on 200 billion tokens, to obtain the Sailor models.
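
To picture what document-level deduplication does, here is a rough, illustrative sketch of the exact-match variant (hash the normalized text, keep the first occurrence). This is *not* the actual Sailor data pipeline, which the [technical report](https://arxiv.org/pdf/2404.03608.pdf) describes in detail:

```python
import hashlib

def exact_dedup(docs):
    """Keep the first occurrence of each document, comparing normalized-text hashes."""
    seen, unique = set(), []
    for doc in docs:
        # collapse whitespace and lowercase so trivial variants collide
        key = hashlib.md5(" ".join(doc.split()).lower().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

print(exact_dedup(["Cara  memanggang ikan?", "cara memanggang ikan?", "Halo!"]))
# ['Cara  memanggang ikan?', 'Halo!']
```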

### GGUF model list

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| [ggml-model-Q2_K.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q2_K.gguf) | Q2_K | 2 | 1.62 GB | small, significant quality loss ❗️ not recommended for most purposes |
| [ggml-model-Q3_K_L.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q3_K_L.gguf) | Q3_K_L | 3 | 2.17 GB | medium, substantial quality loss |
| [ggml-model-Q3_K_M.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q3_K_M.gguf) | Q3_K_M | 3 | 2.03 GB | medium, balanced quality |
| [ggml-model-Q3_K_S.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q3_K_S.gguf) | Q3_K_S | 3 | 1.86 GB | small, high quality loss |
| [ggml-model-Q4_K_M.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q4_K_M.gguf) | Q4_K_M | 4 | 2.46 GB | medium, balanced quality |
| [ggml-model-Q4_K_S.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q4_K_S.gguf) | Q4_K_S | 4 | 2.34 GB | medium, greater quality loss |
| [ggml-model-Q5_K_M.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q5_K_M.gguf) | Q5_K_M | 5 | 2.84 GB | medium, balanced quality |
| [ggml-model-Q5_K_S.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q5_K_S.gguf) | Q5_K_S | 5 | 2.78 GB | medium, very low quality loss |
| [ggml-model-Q6_K.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q6_K.gguf) | Q6_K | 6 | 3.25 GB | medium, extremely low quality loss |
| [ggml-model-Q8_0.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q8_0.gguf) | Q8_0 | 8 | 4.2 GB | large, extremely low quality loss |
| [ggml-model-f16.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-f16.gguf) | f16 | 16 | 7.91 GB | very large, no quality loss |
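
Any single file above can also be fetched without cloning the whole repository, using the `huggingface_hub` client. A minimal sketch (the Q4_K_M file is just an example; any of the filenames in the table works):

```python
from huggingface_hub import hf_hub_download

# downloads the file into the local HF cache and returns its path
model_path = hf_hub_download(
    repo_id="sail/Sailor-4B-Chat-gguf",
    filename="ggml-model-Q4_K_M.gguf",
)
print(model_path)
```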

### How to run with `llama.cpp`

```shell
# install llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
pip install -r requirements.txt

# generate with llama.cpp; the prompt uses Sailor's chat format
# ("question"/"answer" roles) and asks, in Indonesian, "How to grill fish?"
./main -ngl 40 -m ggml-model-Q4_K_M.gguf -p "<|im_start|>question\nCara memanggang ikan?\n<|im_start|>answer\n" --temp 0.7 --repeat_penalty 1.1 -n 400 -e
```

> Change `-ngl 40` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

### How to run with `llama-cpp-python`

```shell
pip install llama-cpp-python
```

```python
import llama_cpp
import llama_cpp.llama_tokenizer

# load the GGUF model from the Hub, using the HF tokenizer for correct tokenization
llama = llama_cpp.Llama.from_pretrained(
    repo_id="sail/Sailor-4B-Chat-gguf",
    filename="ggml-model-Q4_K_M.gguf",
    tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained("sail/Sailor-4B-Chat"),
    n_gpu_layers=40,
    n_threads=8,
    verbose=False,
)

# Sailor-Chat uses 'question' and 'answer' as the user and assistant roles
system_role = 'system'
user_role = 'question'
assistant_role = 'answer'

system_prompt = (
    'You are an AI assistant named Sailor created by Sea AI Lab. '
    'Your answer should be friendly, unbiased, faithful, informative and detailed.'
)
system_prompt = f"<|im_start|>{system_role}\n{system_prompt}<|im_end|>"

# inference example ("Cara memanggang ikan?" is Indonesian for "How to grill fish?")
output = llama(
    system_prompt + '\n' + f"<|im_start|>{user_role}\nCara memanggang ikan?\n<|im_start|>{assistant_role}\n",
    max_tokens=256,
    temperature=0.7,
    top_p=0.75,
    top_k=60,
    stop=["<|im_end|>", "<|endoftext|>"],
)

print(output['choices'][0]['text'])
```
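
The same call also supports token-by-token streaming, which is handy for interactive use. A minimal sketch reusing the variables defined above (`llama`, `system_prompt`, `user_role`, `assistant_role`):

```python
# stream the completion chunk by chunk instead of waiting for the full answer
for chunk in llama(
    system_prompt + '\n' + f"<|im_start|>{user_role}\nCara memanggang ikan?\n<|im_start|>{assistant_role}\n",
    max_tokens=256,
    temperature=0.7,
    stop=["<|im_end|>", "<|endoftext|>"],
    stream=True,
):
    print(chunk['choices'][0]['text'], end='', flush=True)
```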
### How to build demo

Install `llama-cpp-python` and `gradio`, then run the [demo script](https://github.com/sail-sg/sailor-llm/blob/main/demo/llamacpp_demo.py).
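
For reference, a self-contained Gradio chat demo could look like the sketch below. This is only an illustration of the idea, not the official demo script linked above, and it assumes `gr.ChatInterface`'s default pair-based history format:

```python
import gradio as gr
import llama_cpp

# load the quantized model (CPU-only here; add n_gpu_layers for GPU offload)
llama = llama_cpp.Llama.from_pretrained(
    repo_id="sail/Sailor-4B-Chat-gguf",
    filename="ggml-model-Q4_K_M.gguf",
    verbose=False,
)

SYSTEM = "<|im_start|>system\nYou are an AI assistant named Sailor created by Sea AI Lab.<|im_end|>"

def respond(message, history):
    # rebuild the Sailor chat prompt from the accumulated (user, assistant) history
    prompt = SYSTEM + "\n"
    for user_msg, bot_msg in history:
        prompt += f"<|im_start|>question\n{user_msg}\n<|im_start|>answer\n{bot_msg}<|im_end|>\n"
    prompt += f"<|im_start|>question\n{message}\n<|im_start|>answer\n"
    out = llama(prompt, max_tokens=256, temperature=0.7,
                stop=["<|im_end|>", "<|endoftext|>"])
    return out['choices'][0]['text']

gr.ChatInterface(respond).launch()
```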

## License

Sailor is distributed under the terms of the Apache License 2.0.
There is no restriction on research or commercial use, but any use should comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).

## Citation

If you find Sailor useful, please cite our work as follows:

```
@misc{dou2024sailor,
      title={Sailor: Open Language Models for South-East Asia},
      author={Longxu Dou and Qian Liu and Guangtao Zeng and Jia Guo and Jiahui Zhou and Wei Lu and Min Lin},
      year={2024},
      eprint={2404.03608},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Contact Us

If you have any questions, please raise an issue or contact us at [doulx@sea.com](mailto:doulx@sea.com) or [liuqian@sea.com](mailto:liuqian@sea.com).