---
language:
- th
- en
pipeline_tag: text-generation
license: llama3
---
**Llama-3-Typhoon-1.5X-8B-instruct: Thai Large Language Model (Instruct)**
**Llama-3-Typhoon-1.5X-8B-instruct** is an 8-billion-parameter instruct model designed for the Thai 🇹🇭 language. It demonstrates performance competitive with GPT-3.5-turbo and is optimized for **production** environments, **Retrieval-Augmented Generation (RAG)**, **constrained generation**, and **reasoning** tasks.
Built on Typhoon 1.5 8B and Llama 3 8B Instruct, this model is the result of our experiment on cross-lingual transfer. It uses the [task-arithmetic model editing](https://arxiv.org/abs/2212.04089) technique, combining the Thai understanding capability of Typhoon with the human-alignment performance of Llama 3 Instruct.
Remark: To acknowledge Meta's efforts in creating the foundation model and comply with the license, we explicitly include "llama-3" in the model name.
## **Model Description**
- **Model type**: An 8B instruct decoder-only model based on the Llama architecture.
- **Requirement**: Transformers 4.38.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: [**Llama 3 Community License**](https://llama.meta.com/llama3/license/)
## **Performance**
We evaluated the model's performance in **Language & Knowledge Capabilities** and **Instruction Following Capabilities**.
- **Language & Knowledge Capabilities**:
  - Assessed using multiple-choice question-answering datasets such as ThaiExam and MMLU.
- **Instruction Following Capabilities**:
  - Evaluated based on our beta users' feedback, focusing on two factors:
    - **Human Alignment & Reasoning**: the ability to generate responses that are understandable and reasoned across multiple steps.
      - Evaluated using [MT-Bench](https://arxiv.org/abs/2306.05685), which measures how well LLMs apply their knowledge to answer in a way that aligns with human needs.
    - **Instruction-following**: the ability to adhere to constraints specified in the instruction.
      - Evaluated using [IFEval](https://arxiv.org/abs/2311.07911), which measures how well LLMs follow specified constraints, such as formatting and brevity.
Remark: We developed the Thai (TH) versions of these benchmarks by translating the original datasets into Thai and conducting human verification on them.
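The multiple-choice benchmarks above (ThaiExam, MMLU) are commonly scored by comparing the probability the model assigns to each answer option. The snippet below is an illustrative sketch only, not our exact evaluation harness; the prompt format, toy question, and choice letters are assumptions.
```python
# Illustrative sketch: score one multiple-choice question by comparing the
# model's next-token probability for each answer letter (not our exact harness).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "scb10x/llama-3-typhoon-v1.5x-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Toy question in the style of a multiple-choice exam item (hypothetical format).
question = "2 + 2 = ?\nA. 3\nB. 4\nC. 5\nD. 6\nE. 7"
prompt = f"{question}\nAnswer:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # logits for the token right after "Answer:"

# Pick the choice letter the model assigns the highest probability to.
choices = ["A", "B", "C", "D", "E"]
choice_ids = [tokenizer.encode(" " + c, add_special_tokens=False)[0] for c in choices]
prediction = choices[int(torch.argmax(logits[choice_ids]))]
print(prediction)
```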
### ThaiExam
| Model | ONET | IC | TGAT | TPAT-1 | A-Level | Average (ThaiExam) | MMLU |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Typhoon-1.5 8B | 0.446 | **0.431** | **0.722** | **0.526** | 0.407 | **0.5028** | 0.6136 |
| Typhoon-1.5X 8B | **0.478** | 0.379 | **0.722** | 0.500 | **0.435** | **0.5028** | 0.6369 |
| gpt-3.5-turbo-0125 | 0.358 | 0.279 | 0.678 | 0.345 | 0.318 | 0.3956 | **0.700**\*\* |
\*\* We report the MMLU score reported in the GPT-4 technical report.
### MT-Bench
| Model | MT-Bench Thai | MT-Bench English |
| --- | --- | --- |
| Typhoon-1.5 8B | 6.402 | 7.275 |
| Typhoon-1.5X 8B | **6.902** | 7.9 |
| gpt-3.5-turbo-0125 | 6.186 | **8.181** |
### IFEval
| Model | IFEval Thai | IFEval English |
| --- | --- | --- |
| Typhoon-1.5 8B | **0.548** | 0.676 |
| Typhoon-1.5X 8B | **0.548** | **0.691** |
| gpt-3.5-turbo-0125 | 0.479 | 0.659 |
## Insight
We utilized the model editing technique and found that the most critical layers for generating Thai answers are located toward the back of the model (the upper layers of the transformer block). Accordingly, we incorporated a high ratio of Typhoon in these upper layers.
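As a rough illustration of this idea (the model IDs, layer schedule, and ratios below are assumptions for demonstration, not our actual merge recipe), layer-weighted task arithmetic can be sketched as follows:
```python
# Illustrative sketch of layer-weighted task arithmetic (assumed ratios, not our recipe).
# merged = llama3_instruct + alpha(layer) * (typhoon - llama3_base), with alpha growing
# toward the upper layers, where Thai generation matters most.
import re
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
instruct = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
typhoon = AutoModelForCausalLM.from_pretrained("scb10x/llama-3-typhoon-v1.5-8b", torch_dtype=torch.bfloat16)

num_layers = base.config.num_hidden_layers  # 32 for an 8B Llama 3 model

def alpha_for(name: str) -> float:
    """Assumed schedule: small Typhoon contribution in early layers, large in upper layers."""
    m = re.search(r"layers\.(\d+)\.", name)
    if m is None:
        return 0.5  # embeddings, final norm, lm_head
    return 0.2 + 0.7 * (int(m.group(1)) / (num_layers - 1))

base_sd, instruct_sd, typhoon_sd = base.state_dict(), instruct.state_dict(), typhoon.state_dict()
merged_state = {}
for name, w_base in base_sd.items():
    # Task vector of the Thai-adapted model relative to the shared base model.
    task_vector = typhoon_sd[name] - w_base
    merged_state[name] = instruct_sd[name] + alpha_for(name) * task_vector

instruct.load_state_dict(merged_state)
instruct.save_pretrained("llama-3-typhoon-1.5x-8b-instruct-sketch")
```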
## **Usage Example**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "scb10x/llama-3-typhoon-v1.5x-8b-instruct"

# Load the tokenizer and the model in bfloat16, sharding across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example conversation; replace with your own messages.
messages = [
    {"role": "system", "content": "You are a helpful assistant who always answers in Thai."},
    {"role": "user", "content": "ขอสูตรไก่ย่างหน่อย"},  # "Please share a grilled chicken recipe."
]

# Apply the Llama 3 chat template and append the assistant header so the model starts answering.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the end-of-sequence token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.4,
    top_p=0.95,
)

# Strip the prompt tokens and decode only the newly generated answer.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## **Chat Template**
We use the Llama 3 chat template.
```python
{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}
```
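For reference, rendering a single user turn through this template (reusing the `tokenizer` from the usage example above) produces a prompt like this:
```python
# Render the chat template as plain text to inspect the prompt the model receives.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "สวัสดีครับ"}],  # "Hello"
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# สวัสดีครับ<|eot_id|><|start_header_id|>assistant<|end_header_id|>
#
```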
## **Intended Uses & Limitations**
This model is experimental and might not be fully evaluated for all use cases. Developers should assess risks in the context of their specific applications.
## **Follow us**
[**https://twitter.com/opentyphoon**](https://twitter.com/opentyphoon)
## **Support**
[**https://discord.gg/CqyBscMFpg**](https://discord.gg/CqyBscMFpg)
## **SCB 10X Typhoon Team**
- Kunat Pipatanakul, Potsawee Manakul, Sittipong Sripaisarnmongkol, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-1.5X useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
title={Typhoon: Thai Large Language Models},
author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
year={2023},
journal={arXiv preprint arXiv:2312.13951},
url={https://arxiv.org/abs/2312.13951}
}
```
## **Contact Us**
- General & Collaboration: [**kasima@scb10x.com**](mailto:kasima@scb10x.com), [**pathomporn@scb10x.com**](mailto:pathomporn@scb10x.com)
- Technical: [**kunat@scb10x.com**](mailto:kunat@scb10x.com)