---
language:
- th
- en
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
---

# SambaLingo-Thai-Chat-70B

<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

SambaLingo-Thai-Chat-70B is a human-aligned chat model trained in Thai and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Thai-Base-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Base-70B). The base model adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Thai by training on 26 billion tokens from the Thai split of the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).

## Model Description

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Try This Model:** [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post:** [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)

## Getting Started

### Loading Model With Hugging Face
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# use_fast=False selects the slow (SentencePiece-based) tokenizer this model requires.
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", use_fast=False)
# device_map="auto" shards the 70B checkpoint across available devices;
# torch_dtype="auto" loads the weights in the dtype stored in the checkpoint.
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", torch_dtype="auto")
```

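A short usage sketch with the objects just loaded. The question string is a placeholder, and the sampling settings are taken from the Suggested Inference Parameters section below:
```python
# Build a prompt with the model's chat template, then generate a reply.
messages = [{"role": "user", "content": "YOUR_QUESTION"}]  # replace with your question
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, do_sample=True, temperature=0.8, top_p=0.9, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
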
### Interacting With Model Pipeline
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import pipeline

# use_fast=False is forwarded to the tokenizer, which must be the slow variant.
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", use_fast=False)
messages = [
    {"role": "user", "content": "YOUR_QUESTION"},  # replace with your question
]
# Render the conversation with the model's chat template before generation.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]["generated_text"]
```

### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9

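These values map directly onto `transformers` generation arguments. A minimal sketch, reusing `pipe` and `prompt` from the pipeline section above (`max_new_tokens` is an illustrative choice, not part of the suggested parameters):
```python
# Pass the suggested sampling parameters through the text-generation pipeline.
outputs = pipe(
    prompt,
    do_sample=True,          # enable sampling so temperature/top_p take effect
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=512,      # illustrative cap, not an official recommendation
)[0]["generated_text"]
```
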
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```

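`tokenizer.apply_chat_template` renders this template for you. If you are building prompts by hand instead, a sketch of the same format (`format_prompt` is an illustrative helper, not part of the model's API):
```python
def format_prompt(question: str) -> str:
    # Mirrors the chat template above: <|user|>\n{question}</s>\n<|assistant|>\n
    return f"<|user|>\n{question}</s>\n<|assistant|>\n"

prompt = format_prompt("YOUR_QUESTION")  # replace with your question
```
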
### Example Prompts and Generations
```
<|user|>
ปรัชญาทางเศรษฐกิจที่พระบาทสมเด็จพระมหาภูมิพลอดุลยเดชมหาราช บรมนาถบพิตรมีพระราชดำรัสแก่ชาวไทย มีชื่อว่าอะไร</s>
<|assistant|>
ปรัชญาทางเศรษฐกิจที่พระบาทสมเด็จพระปรมินทรมหาภูมิพลอดุลยเดชทรงมีพระราชดำรัสแก่ชาวไทยเรียกว่า "ปรัชญาเศรษฐกิจพอเพียง" ปรัชญานี้เน้นความสำคัญของการพัฒนาแบบองค์รวมและความสมดุล โดยคำนึงถึงสภาพแวดล้อมทางสังคม เศรษฐกิจ และสิ่งแวดล้อม ปรัชญาเศรษฐกิจพอเพียงเน้นความสำคัญของการพึ่งพาตนเอง ความมีเหตุผล และการประหยัด ตลอดจนความสำคัญของการพัฒนาจิตใจและจิตวิญญาณ ปรัชญานี้เน้นย้ำถึงความสำคัญของการดำเนินชีวิตอย่างพอประมาณ มีเหตุผล และมีคุณธรรม ตลอดจนความสำคัญของการพัฒนาความรู้และทักษะอย่างต่อเนื่อง ปรัชญาเศรษฐกิจพอเพียงได้รับการยกย่องอย่างกว้างขวางว่าเป็นแนวทางแก้ไขปัญหาการพัฒนาที่ยั่งยืนและเป็นแนวทางในการดำเนินชีวิตที่นำไปสู่ความสุขและความเป็นอยู่ที่ดี
```
(The prompt asks for the name of the economic philosophy that King Bhumibol Adulyadej espoused to the Thai people; the model answers that it is known as the "Sufficiency Economy Philosophy" and summarizes its emphasis on moderation, self-reliance, and sustainable development.)

## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and comprises two stages: supervised fine-tuning (SFT) and direct preference optimization (DPO).

The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google-translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.

The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and the [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google-translated. It was trained with global batch size 32 for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup, and β=0.1 as the regularization factor for DPO.

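As a rough illustration, the sketch below expresses these DPO hyperparameters as a Hugging Face TRL `DPOConfig`. TRL is an assumption on our part; the card does not name the training framework, and model/dataset wiring is deliberately not shown.
```python
# Hypothetical mapping of the stated DPO hyperparameters onto a TRL DPOConfig;
# the card does not say TRL was used, and model/dataset setup is omitted.
from trl import DPOConfig

dpo_config = DPOConfig(
    output_dir="sambalingo-thai-dpo",  # placeholder path
    beta=0.1,                          # DPO regularization factor
    learning_rate=5e-7,
    lr_scheduler_type="linear",        # linear decay
    warmup_ratio=0.1,                  # 10% warmup
    num_train_epochs=3,
    per_device_train_batch_size=4,     # choose so devices x batch x accumulation = 32 (global)
    gradient_accumulation_steps=8,
)
```
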
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.

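Mechanically, this kind of vocabulary extension can be done with standard `transformers` APIs. A minimal sketch on a small base model (the token list and model name are placeholders, not the SambaLingo recipe):
```python
# Minimal sketch of extending a tokenizer vocabulary and resizing embeddings.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

new_tokens = ["ประเทศไทย", "เศรษฐกิจ"]  # placeholder Thai tokens
num_added = tokenizer.add_tokens(new_tokens)

# Grow the input/output embedding matrices to match the enlarged vocabulary.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```
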
## Uses

### Direct Use

Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.

### Out-of-Scope Use

SambaLingo should NOT be used for:

- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions

## Bias, Risks, and Limitations

Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo

## Cite SambaLingo
```
@software{sambalingo,
  title = {{SambaLingo: Open Source Language Experts}},
  author = {SambaNova Systems},
  url = {https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B},
  month = {2},
  year = {2024},
  version = {1.0},
}
```