---
license: llama2
datasets:
- uonlp/CulturaX
language:
- th
- en
metrics:
- chrf
- accuracy
- bleu
---

# SambaLingo-Thai-Base-70B

<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Thai-Base-70B is a pretrained bilingual Thai and English model that adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Thai by training on 26 billion tokens from the Thai split of the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset. The model achieves state-of-the-art results in perplexity and FLORES-200 translation. For the chat version of this model, see [sambanovasystems/SambaLingo-Thai-Chat-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B), or try it out at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).

## Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Try the chat version of this model:** [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post:** [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)

## Getting Started

### Loading Model With Hugging Face
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Base-70B")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Base-70B", device_map="auto", torch_dtype="auto")
```

### Suggested Inference Parameters
Because this is a pretrained (not instruction-tuned) checkpoint, we suggest setting `do_sample=False` for greedy decoding.
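
Building on the loading snippet above, a minimal generation sketch with this setting might look as follows; the Thai prompt ("Thailand's capital city is named") is our own illustrative example, not from the model card:

```python
# Greedy decoding (do_sample=False), as suggested for this pretrained checkpoint.
# Completion-style prompt, appropriate for a base (non-chat) model.
inputs = tokenizer("ประเทศไทยมีเมืองหลวงชื่อ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```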

### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively please use few-shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model, which has been aligned with human preferences: [sambanovasystems/SambaLingo-Thai-Chat-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B).
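
As a sketch, a few-shot translation prompt could be built like this; the exemplars are our own illustrative choices, and the Llama tokenizer prepends the \<s\> (BOS) token automatically:

```python
# Hypothetical few-shot exemplars; replace with examples suited to your task.
few_shot_prompt = (
    "English: Hello\nThai: สวัสดี\n\n"
    "English: Thank you\nThai: ขอบคุณ\n\n"
    "English: Good morning\nThai:"
)
inputs = tokenizer(few_shot_prompt, return_tensors="pt").to(model.device)  # BOS token is added by the tokenizer
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```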

## Training Details
All pre-training is done on the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset. We mix the data to be 75% data from the language we are adapting to and 25% English, as suggested by [Csaki et al.](https://arxiv.org/abs/2311.05741) We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01, and a weight decay of 0.1.
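
For quick reference, the hyperparameters above can be summarized as follows; the key names are our own shorthand, not the authors' actual training configuration:

```python
# Shorthand summary of the pre-training setup described above; key names are illustrative.
pretraining_config = {
    "data_mixture": {"thai (CulturaX)": 0.75, "english (CulturaX)": 0.25},
    "sequence_length": 4096,       # packed; attention restricted to the same source document
    "global_batch_size": 1024,
    "max_learning_rate": 1e-4,     # with cosine decay
    "warmup_ratio": 0.01,
    "weight_decay": 0.1,
}
```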

## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
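
A minimal sketch of this kind of vocabulary extension with the `transformers` API; the token list is a hypothetical placeholder, and the authors' actual procedure may differ:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf", device_map="auto", torch_dtype="auto")

# Hypothetical placeholder: up to 25,000 non-overlapping Thai tokens mined from the new-language corpus.
new_thai_tokens = ["สวัสดี", "ขอบคุณ"]  # ... elided ...
tokenizer.add_tokens(new_thai_tokens)

# Grow the embedding matrix to match the extended vocabulary (~57,000 entries).
model.resize_token_embeddings(len(tokenizer))
```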

## Evaluation
For detailed evaluation results, please refer to our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829).

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:

- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo

## Cite SambaLingo
```
@software{sambalingo,
  title = {{SambaLingo: Open Source Language Experts}},
  author = {SambaNova Systems},
  url = {https://huggingface.co/sambanovasystems/SambaLingo-Thai-Base-70B},
  month = {2},
  year = {2024},
  version = {1.0},
}
```