
Llama-3-Typhoon-1.5X-70B-instruct-awq: Thai Large Language Model (Instruct) - AWQ 4bit quantized

Llama-3-Typhoon-1.5X-70B-instruct is a 70 billion parameter instruct model designed for Thai 🇹🇭 language. It demonstrates competitive performance with GPT-4-0612, and is optimized for application use cases, Retrieval-Augmented Generation (RAG), constrained generation, and reasoning tasks.

Built on Typhoon 1.5 70B (not yet released) and Llama 3 70B Instruct, this model is the result of our experiments on cross-lingual transfer. It uses the task-arithmetic model-editing technique, combining the Thai understanding capability of Typhoon with the human-alignment performance of Llama 3 Instruct.

Remark: To acknowledge Meta's efforts in creating the foundation model and comply with the license, we explicitly include "llama-3" in the model name.
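
The merge follows the standard task-arithmetic formulation: a "task vector" capturing what one model learned on top of a shared base is scaled and added to another model built on the same base. The snippet below is a minimal sketch of that idea, assuming Typhoon 1.5 70B and Llama 3 70B Instruct share the Llama 3 70B base; the function name and mixing coefficient are hypothetical and not our exact recipe.

# Minimal task-arithmetic sketch (hypothetical; not the exact Typhoon-1.5X recipe).
# Each argument is a state dict mapping parameter names to torch tensors.
def task_arithmetic_merge(typhoon, llama3_base, llama3_instruct, alpha=0.5):
    merged = {}
    for name, instruct_weight in llama3_instruct.items():
        # Thai "task vector": what Typhoon 1.5 learned on top of the shared Llama 3 base.
        thai_task_vector = typhoon[name] - llama3_base[name]
        # Add the scaled task vector to the human-aligned instruct weights.
        merged[name] = instruct_weight + alpha * thai_task_vector
    return merged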

Model Description

Performance

We evaluated the model's performance across three areas: language & knowledge capabilities, instruction-following capabilities, and agentic capabilities.

  • Language & Knowledge Capabilities:
    • Assessed using multiple-choice question-answering datasets such as ThaiExam and MMLU.
  • Instruction Following Capabilities:
    • Evaluated based on beta users' feedback, focusing on two factors:
      • Human Alignment & Reasoning: Ability to generate responses that are clear and logically structured across multiple steps.
        • Evaluated using MT-Bench — How LLMs can align with human needs.
      • Instruction-following: Ability to adhere to specified constraints in the instructions.
        • Evaluated using IFEval — How LLMs can follow specified constraints, such as formatting and brevity.
  • Agentic Capabilities:
    • Evaluated on agent-style tasks (GAIA, GSM8K, HotpotQA); results are reported in the Agent table below.

Remark: We developed the Thai (TH) pairs by translating the original datasets into Thai through machine and human methods.

ThaiExam

| Model | ONET | IC | TGAT | TPAT-1 | A-Level | Average (ThaiExam) | MMLU |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Typhoon-1.5X 70B | 0.565 | 0.68 | 0.778 | 0.517 | 0.56 | 0.620 | 0.7945 |
| gpt-4-0612 | 0.493 | 0.69 | 0.744 | 0.509 | 0.616 | 0.610 | 0.864** |
| gpt-4o | 0.62 | 0.63 | 0.789 | 0.56 | 0.623 | 0.644 | 0.887** |

** We report the MMLU scores as reported in the GPT-4o technical report.

MT-Bench

| Model | MT-Bench Thai | MT-Bench English |
| --- | --- | --- |
| Typhoon-1.5X 70B | 8.029 | 8.797 |
| gpt-4-0612 | 7.801 | 8.671 |
| gpt-4o | 8.514 | 9.184 |

IFEval

| Model | IFEval Thai | IFEval English |
| --- | --- | --- |
| Typhoon-1.5X 70B | 0.645 | 0.810 |
| gpt-4-0612 | 0.612 | 0.793* |
| gpt-4o | 0.737 | 0.871 |
* We report the number from the IFEval paper.

Agent

| Model | GAIA - Thai/English | GSM8K - Thai/English | HotpotQA - Thai/English |
| --- | --- | --- | --- |
| gpt-3.5-turbo-0125 | 18.42/37.5 | 70/80 | 39.56/59 |
| Typhoon-1.5X 70B | 17.10/36.25 | 80/95 | 52.7/65.83 |
| gpt-4-0612 | 17.10/38.75 | 90/100 | 56.41/76.25 |
| gpt-4o | 44.73/57.5 | 100/100 | 71.64/76.58 |

Insight

We utilized model-editing techniques and found that the components most critical for generating accurate Thai answers are located in the upper layers of the transformer stack. Accordingly, we incorporated a high ratio of Typhoon weights in these upper layers to enhance the model's performance.
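
As a rough illustration of this layer-weighted merge, the sketch below applies a larger Typhoon mixing ratio to parameters in the upper decoder layers; the layer-name parsing and the ratio schedule are hypothetical and only meant to convey the idea, not the exact values we used.

import re

NUM_LAYERS = 80  # Llama 3 70B has 80 decoder layers

def typhoon_ratio(param_name, low=0.3, high=0.8):
    """Hypothetical schedule: a larger Typhoon share for deeper (upper) layers."""
    match = re.search(r"layers\.(\d+)\.", param_name)
    if match is None:
        return low  # embeddings, final norm, lm_head keep the smaller Typhoon share
    depth = int(match.group(1)) / (NUM_LAYERS - 1)
    return low + (high - low) * depth

def layer_weighted_merge(typhoon, llama3_instruct):
    # Interpolate each parameter, leaning more on Typhoon as depth increases.
    merged = {}
    for name, instruct_weight in llama3_instruct.items():
        r = typhoon_ratio(name)
        merged[name] = r * typhoon[name] + (1.0 - r) * instruct_weight
    return merged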

Usage Example

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Load the 4-bit AWQ checkpoint with vLLM; max_model_len matches the 8k context window.
quant_path = "scb10x/llama-3-typhoon-v1.5x-70b-instruct-awq"
llm = LLM(model=quant_path, quantization='awq', max_model_len=8192)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

messages = [
    # chat messages in the Llama 3 format, e.g. {"role": "user", "content": "..."}
]

# Render the chat into a single prompt string using the model's chat template.
prompts = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False
)

# Stop on the Llama 3 special tokens so generation ends cleanly after the assistant turn.
sampling_params = SamplingParams(repetition_penalty=1.05, top_p=0.6, temperature=0.9, max_tokens=1024, stop=['<|eot_id|>', '<|start_header_id|>', '<|end_header_id|>'])
outputs = llm.generate(prompts, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
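
Alternatively, the AWQ checkpoint can be loaded directly with transformers; this is a minimal sketch assuming the autoawq package is installed and enough GPU memory is available for the 4-bit 70B weights. The generation settings mirror the vLLM example above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_path = "scb10x/llama-3-typhoon-v1.5x-70b-instruct-awq"
tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, torch_dtype=torch.float16, device_map="auto")

messages = [
    # chat messages in the Llama 3 format, e.g. {"role": "user", "content": "..."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.9,
    top_p=0.6,
    repetition_penalty=1.05,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))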

Chat Template

We use the Llama 3 chat template.

{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}
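
For reference, rendering a short conversation through this template produces the prompt shown in the comments below; the example messages are purely illustrative.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("scb10x/llama-3-typhoon-v1.5x-70b-instruct-awq")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ"},
]
print(tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False))
# <|begin_of_text|><|start_header_id|>system<|end_header_id|>
#
# You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
#
# สวัสดีครับ<|eot_id|><|start_header_id|>assistant<|end_header_id|>
#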

Intended Uses & Limitations

This model is experimental and might not be fully evaluated for all use cases. Developers should assess risks in the context of their specific applications.

Follow us

https://twitter.com/opentyphoon

Support

https://discord.gg/CqyBscMFpg

SCB 10X Typhoon Team

  • Kunat Pipatanakul, Potsawee Manakul, Sittipong Sripaisarnmongkol, Natapong Nitarach, Pathomporn Chokchainant, Kasima Tharnpipitchai
  • If you find Typhoon-1.5X useful for your work, please cite it using:
@article{pipatanakul2023typhoon,
    title={Typhoon: Thai Large Language Models}, 
    author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
    year={2023},
    journal={arXiv preprint arXiv:2312.13951},
    url={https://arxiv.org/abs/2312.13951}
}

Contact Us
