---
language:
- th
- en
pipeline_tag: text-generation
license: llama3
---
**Llama-3-Typhoon-1.5X-8B-instruct: Thai Large Language Model (Instruct)**

**Llama-3-Typhoon-1.5X-8B-instruct** is an 8-billion-parameter instruct model designed for the Thai 🇹🇭 language. It demonstrates performance competitive with GPT-3.5-turbo and is optimized for **production** environments, **Retrieval-Augmented Generation (RAG)**, **constrained generation**, and **reasoning** tasks.

Built on Typhoon 1.5 8B and Llama 3 8B Instruct, this model is the result of our experiment on cross-lingual transfer. It uses the [task-arithmetic model editing](https://arxiv.org/abs/2212.04089) technique, combining the Thai understanding capability of Typhoon with the human alignment performance of Llama 3 Instruct.

Remark: To acknowledge Meta's efforts in creating the foundation model and to comply with its license, we explicitly include "llama-3" in the model name.
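The task-arithmetic merge referenced above can be sketched in a few lines. This is a minimal illustration on toy scalar "weights", not the actual merge script; the scaling ratios (0.7 and 0.3) and all variable names are assumptions for demonstration only.

```python
# Minimal sketch of task-arithmetic model editing (Ilharco et al., 2022),
# shown on plain dicts of floats instead of real model tensors.
# The scaling ratios below are illustrative assumptions, not the real recipe.

def task_vector(finetuned, base):
    # A task vector is the per-parameter difference: finetuned minus base.
    return {k: finetuned[k] - base[k] for k in base}

def merge(base, vectors_with_scales):
    # Merged model = base + sum of scaled task vectors.
    merged = dict(base)
    for vec, scale in vectors_with_scales:
        for k in merged:
            merged[k] += scale * vec[k]
    return merged

# Toy one-parameter "models": Llama 3 base, Typhoon (Thai), Llama 3 Instruct.
base = {"w": 1.0}
typhoon = {"w": 1.6}    # carries Thai capability
instruct = {"w": 1.2}   # carries human alignment

thai_vec = task_vector(typhoon, base)    # {"w": 0.6}
align_vec = task_vector(instruct, base)  # {"w": 0.2}

# Combine both capabilities onto the base model.
merged = merge(base, [(thai_vec, 0.7), (align_vec, 0.3)])
print(merged["w"])  # 1.0 + 0.7*0.6 + 0.3*0.2, approximately 1.48
```

The key property of this technique is that capability differences are composable: each fine-tune is reduced to a weight delta, and deltas from different fine-tunes can be summed with chosen scales onto one base model.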

## **Model Description**

- **Model type**: An 8B instruct decoder-only model based on the Llama architecture.
- **Requirement**: Transformers 4.38.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: [**Llama 3 Community License**](https://llama.meta.com/llama3/license/)

## **Performance**

We evaluated the model's performance in **Language & Knowledge Capabilities** and **Instruction Following Capabilities**.

- **Language & Knowledge Capabilities**:
    - Assessed using multiple-choice question-answering datasets such as ThaiExam and MMLU.
- **Instruction Following Capabilities**:
    - Evaluated based on our beta users' feedback, focusing on two factors:
        - **Human Alignment & Reasoning**: the ability to generate responses that are understandable and reasoned across multiple steps.
            - Evaluated using [MT-Bench](https://arxiv.org/abs/2306.05685), which measures how well an LLM draws on its embedded knowledge to answer in a way that aligns with human needs.
        - **Instruction-following**: the ability to adhere to constraints specified in the instruction.
            - Evaluated using [IFEval](https://arxiv.org/abs/2311.07911), which measures how well an LLM follows specified constraints, such as formatting and brevity.

Remark: We developed the Thai (TH) versions of these benchmarks by translating the original datasets into Thai and having humans verify the translations.

### ThaiExam

| Model | ONET | IC | TGAT | TPAT-1 | A-Level | Average (ThaiExam) | MMLU |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Typhoon-1.5 8B | 0.446 | 0.431 | 0.722 | 0.526 | 0.407 | 0.5028 | 0.6136 |
| Typhoon-1.5X 8B | 0.478 | 0.379 | 0.722 | 0.500 | 0.435 | 0.5028 | 0.6369 |
| gpt-3.5-turbo-0125 | 0.358 | 0.279 | 0.678 | 0.345 | 0.318 | 0.3956 | 0.700** |

### MT-Bench

| Model | MT-Bench Thai | MT-Bench English |
| --- | --- | --- |
| Typhoon-1.5 8B | 6.402 | 7.275 |
| Typhoon-1.5X 8B | 6.902 | 7.900 |
| gpt-3.5-turbo-0125 | 6.186 | 8.181 |

### IFEval

| Model | IFEval Thai | IFEval English |
| --- | --- | --- |
| Typhoon-1.5 8B | 0.548 | 0.676 |
| Typhoon-1.5X 8B | 0.548 | 0.691 |
| gpt-3.5-turbo-0125 | 0.479 | 0.659 |

## Insight

We applied the model editing technique and found that the capability most critical for generating Thai answers is located toward the back of the network, i.e. the upper layers of the transformer block. Accordingly, we incorporated a high ratio of Typhoon weights in these upper layers.
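A layer-wise merge with a higher Typhoon ratio in the upper layers can be sketched as below. The ratio schedule (0.2 rising linearly to 0.9) is an illustrative assumption, not the ratio actually used; only the 32-layer count matches Llama-3-8B.

```python
# Sketch: layer-wise linear interpolation between two models' weights,
# giving the Thai model (Typhoon) more influence in the upper layers.
# The ratio schedule below is an assumption for illustration.

NUM_LAYERS = 32  # Llama-3-8B has 32 transformer layers

def typhoon_ratio(layer_idx, num_layers=NUM_LAYERS):
    # Linearly increase the Typhoon share from 0.2 (bottom) to 0.9 (top).
    lo, hi = 0.2, 0.9
    return lo + (hi - lo) * layer_idx / (num_layers - 1)

def merge_layer(typhoon_w, instruct_w, ratio):
    # Per-parameter linear interpolation for one layer.
    return {k: ratio * typhoon_w[k] + (1 - ratio) * instruct_w[k]
            for k in typhoon_w}

# Toy one-parameter "layers" for demonstration.
typhoon = [{"w": 1.0} for _ in range(NUM_LAYERS)]
instruct = [{"w": 0.0} for _ in range(NUM_LAYERS)]

merged = [merge_layer(typhoon[i], instruct[i], typhoon_ratio(i))
          for i in range(NUM_LAYERS)]

print(merged[0]["w"], merged[-1]["w"])  # bottom layer ~0.2, top layer ~0.9
```

With the toy weights above, the merged value per layer equals the Typhoon ratio itself, so the Typhoon influence grows monotonically from the bottom layer to the top one.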

## **Usage Example**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "scb10x/llama-3-typhoon-v1.5x-8b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 halves memory relative to fp32
    device_map="auto",           # place layers on the available GPU(s)
)

messages = [...]  # add messages here

# Build the prompt with the model's chat template and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop on either the EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.4,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## **Chat Template**

We use the Llama 3 chat template.

```jinja
{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}
```
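To make the template's output concrete, the plain-Python sketch below mirrors its logic by hand for a single user message. It assumes `bos_token` is `<|begin_of_text|>`, as in the Llama 3 tokenizer; in practice you would call `tokenizer.apply_chat_template` instead.

```python
# Hand-rolled rendering of the Llama 3 chat template above, for illustration.
# Assumes bos_token = "<|begin_of_text|>" (the Llama 3 tokenizer's BOS token).

BOS = "<|begin_of_text|>"

def render(messages, add_generation_prompt=True):
    out = ""
    for i, m in enumerate(messages):
        # Each message: role header, blank line, trimmed content, end-of-turn.
        block = (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                 f"{m['content'].strip()}<|eot_id|>")
        if i == 0:
            block = BOS + block  # BOS is prepended to the first message only
        out += block
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = render([{"role": "user", "content": "สวัสดีครับ"}])
print(prompt)
```

The rendered string starts with `<|begin_of_text|>`, wraps each turn in `<|start_header_id|>...<|end_header_id|>` and `<|eot_id|>`, and ends with an open assistant header for generation.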

## **Intended Uses & Limitations**

This model is experimental and might not be fully evaluated for all use cases. Developers should assess risks in the context of their specific applications.

## **Follow us**

[**https://twitter.com/opentyphoon**](https://twitter.com/opentyphoon)

## **Support**

[**https://discord.gg/CqyBscMFpg**](https://discord.gg/CqyBscMFpg)

## **SCB 10X Typhoon Team**

- Kunat Pipatanakul, Potsawee Manakul, Sittipong Sripaisarnmongkol, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-1.5X useful for your work, please cite it using:

```
@article{pipatanakul2023typhoon,
    title={Typhoon: Thai Large Language Models},
    author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
    year={2023},
    journal={arXiv preprint arXiv:2312.13951},
    url={https://arxiv.org/abs/2312.13951}
}
```

## **Contact Us**

- General & Collaboration: [**kasima@scb10x.com**](mailto:kasima@scb10x.com), [**pathomporn@scb10x.com**](mailto:pathomporn@scb10x.com)
- Technical: [**kunat@scb10x.com**](mailto:kunat@scb10x.com)