SEA-LION-7B-Instruct-GPTQ

SEA-LION is a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Southeast Asia (SEA) region. The models range in size from 3 billion to 7 billion parameters.

SEA-LION-7B-Instruct is a multilingual model which has been fine-tuned with thousands of English and Indonesian instruction-completion pairs, alongside a smaller pool of instruction-completion pairs from other ASEAN languages. These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive, and high-quality datasets.

SEA-LION-7B-Instruct-GPTQ is the quantized version of the SEA-LION-7B-Instruct model, produced with a modified version of the AutoGPTQ library using Wikipedia texts as calibration data.

SEA-LION stands for Southeast Asian Languages In One Network.

  • Developed by: Products Pillar, AI Singapore
  • Funded by: Singapore NRF
  • Model type: Decoder
  • Languages: English, Chinese, Indonesian, Malay, Thai, Vietnamese, Filipino, Tamil, Burmese, Khmer, Lao
  • License: MIT License

Model Details

Base model

SEA-LION-7B-Instruct-GPTQ is quantized from SEA-LION-7B-Instruct.
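
For context, the sketch below shows what a typical GPTQ quantization run with AutoGPTQ looks like. It is a minimal illustration only: the calibration sentences and output directory are placeholder assumptions, and the actual release used Wikipedia texts with a modified AutoGPTQ fork whose changes are not reproduced here.

# Hedged sketch: typical GPTQ quantization flow with AutoGPTQ.
# The calibration texts and output path are illustrative placeholders; the
# actual release used Wikipedia texts and a modified AutoGPTQ library.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

base_model = "aisingapore/sea-lion-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

# Same settings as the released artifact: 4-bit weights, group size 128
quantize_config = BaseQuantizeConfig(bits=4, group_size=128)

model = AutoGPTQForCausalLM.from_pretrained(
    base_model,
    quantize_config,
    trust_remote_code=True
)

# Placeholder calibration samples (the release used Wikipedia texts)
calibration_texts = [
    "Singapore is a city-state in Southeast Asia.",
    "Bahasa Indonesia adalah bahasa resmi Republik Indonesia."
]
examples = [tokenizer(text, return_tensors="pt") for text in calibration_texts]

model.quantize(examples)  # runs GPTQ layer by layer over the calibration set
model.save_quantized("sea-lion-7b-instruct-gptq-4bit")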

Benchmark Performance

| Model | ARC | HellaSwag | MMLU | TruthfulQA | Average |
|---|---|---|---|---|---|
| SEA-LION 7B Instruct (FP16) | 40.78 | 68.20 | 27.12 | 36.29 | 43.10 |
| SEA-LION 7B Instruct GPTQ (4-bit, group size 128) | 39.93 | 67.32 | 27.11 | 36.32 | 42.67 |
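
Scores like these are typically obtained with EleutherAI's lm-evaluation-harness. The snippet below is a hedged sketch of such a run for the FP16 model, not the exact setup behind this table; the task names, few-shot settings and harness version are assumptions, and evaluating the GPTQ variant additionally requires the modified AutoGPTQ library to load the weights.

# Hedged sketch: leaderboard-style evaluation with lm-evaluation-harness.
# Task names, few-shot settings and harness version are assumptions; the exact
# configuration behind the table above is not documented in this card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=aisingapore/sea-lion-7b-instruct,trust_remote_code=True,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
    batch_size=4
)
print(results["results"])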

Usage

For the full installation, training and inference guide, please refer to the GitHub repository.

To run SEA-LION-7B-Instruct-GPTQ, please install the modified version of the AutoGPTQ library. Installation instructions can be found here.

SEA-LION can be run using the 🤗 Transformers library:

# Please use transformers>=4.37.2

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "aisingapore/sea-lion-7b-instruct-gptq",
    trust_remote_code=True
)

quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128
)

model = AutoGPTQForCausalLM.from_quantized(  # will be loaded to GPU
    "aisingapore/sea-lion-7b-instruct-gptq",
    device="cuda:0",
    quantize_config=quantize_config,
    torch_dtype=torch.float16,
    trust_remote_code=True
)

generation_kwargs = {
    "do_sample": False,  # set to True if temperature is not 0
    "temperature": None,
    "max_new_tokens": 256,
    "top_k": 50,
    "top_p": 0.7,
    "repetition_penalty": 1.2,
    "eos_token_id": tokenizer.eos_token_id
}

prompt_template = "### USER:\n{human_prompt}\n\n### RESPONSE:\n"
# Indonesian example prompt: "What is the sentiment of the following sentence?
# Sentence: This book is very boring. Answer:"
prompt_in = """Apa sentimen dari kalimat berikut ini?
Kalimat: Buku ini sangat membosankan.
Jawaban: """

full_prompt = prompt_template.format(human_prompt=prompt_in)

tokens = tokenizer(full_prompt, return_tensors="pt")

input_ids = tokens["input_ids"].to("cuda:0") # move tokenized input to GPU

# Remove unneeded kwargs
if not generation_kwargs["do_sample"]:
    generation_kwargs.pop("temperature")
    generation_kwargs.pop("top_k")
    generation_kwargs.pop("top_p")

output = model.generate(
    input_ids=input_ids,
    **generation_kwargs
)

print(tokenizer.decode(output[0], skip_special_tokens=True))
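
The decoded output above includes the prompt, since decoder-only models return their input tokens followed by the continuation. To print only the generated continuation, the prompt tokens can be sliced off first:

# Decode only the newly generated tokens, skipping the echoed prompt
generated_tokens = output[0][input_ids.shape[1]:]
print(tokenizer.decode(generated_tokens, skip_special_tokens=True))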

Prompting Guide

Coming soon

Caveats

Users should be aware that our model exhibits certain limitations that warrant consideration. Firstly, like many LLMs, the model can hallucinate and occasionally generate irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to potential inconsistencies in its reasoning. Finally, the model has not been optimized for multi-turn dialogue interactions, which may result in reduced effectiveness in extended conversations.

Limitations

Safety

Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.

Commercially Non-Permissive and Commercially Permissive SEA-LION Releases

The previous release of the commercially non-permissive SEA-LION-Instruct-Research enabled us to explore the full research potential of SEA-LION when allowed to take full advantage of what is publicly available. In contrast, in building the commercially permissive SEA-LION-7B-Instruct, we had to leave out high-quality instruction data that was either proprietary, restricted by non-commercial licenses, or in a legal gray area. This left us with a much smaller proportion of commercially permissive data to work with, a problem that is even more pronounced for low-resource languages. We thus hope this will sound a call to action for more initiatives to create commercially viable data in the region, enabling practical benefits for all.

Technical Specifications

Fine-Tuning Details

SEA-LION-7B-Instruct was fine-tuned on 8x A100-40GB GPUs using parameter-efficient fine-tuning (PEFT) in the form of LoRA.
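
As a rough illustration only, a LoRA setup with the 🤗 PEFT library might look like the sketch below. The rank, alpha, dropout and target module names are assumptions for illustration, not the configuration actually used for SEA-LION-7B-Instruct.

# Hedged sketch: parameter-efficient fine-tuning with LoRA via the PEFT library.
# Hyperparameters and target_modules are illustrative assumptions, not the
# configuration actually used to train SEA-LION-7B-Instruct.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "aisingapore/sea-lion-7b",  # pretrained base model
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)

lora_config = LoraConfig(
    r=8,                                   # assumed LoRA rank
    lora_alpha=16,                         # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["Wqkv", "out_proj"],   # assumed attention projection names
    task_type="CAUSAL_LM"
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
# The wrapped model can then be trained with a standard supervised fine-tuning loop.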

Data

SEA-LION-7B-Instruct was trained on a wide range of instructions that were manually and stringently verified by our team. A large portion of the effort was dedicated to ensuring that every instruction-completion pair the model sees is of high quality; erroneous pairs were corrected and rewritten by native speakers, or else dropped from our mix.

In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.

Link to dataset: coming soon

Call for Contributions

We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.

The Team

Bui Quang Huy (NUS)
Jasshan Kumeresh (NUS)
Ng Boon Cheong Raymond
Siow Bryan
Teng Walter

Acknowledgements

AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.

Contact

For more info, please contact us using this SEA-LION Inquiry Form.

Link to SEA-LION's GitHub repository

Disclaimer

This is the repository for the commercial instruction-tuned model. The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
