---
license: llama2
datasets:
- uonlp/CulturaX
language:
- tr
- en
metrics:
- chrf
- accuracy
- bleu
---

# SambaLingo-Turkish-Base

<img src="SambaLingo_Logo.png" width="340" style="margin-left: auto; margin-right: auto; display: block;"/>

<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Turkish-Base is a pretrained bilingual Turkish and English model that adapts [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf) to Turkish by training on 63 billion tokens from the Turkish split of the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset. The model reports state-of-the-art evaluation results in perplexity and on FLORES-200 translation. For the chat version of this model, please see [sambanovasystems/SambaLingo-Turkish-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Turkish-Chat).

## Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Turkish, English
- **Finetuned from model:** [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Blog Post:** Will be released soon!

## Getting Started

### Loading the model with Hugging Face
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the tokenizer and model weights from the Hugging Face Hub;
# device_map="auto" places weights on the available devices and
# torch_dtype="auto" uses the dtype stored in the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Turkish-Base")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Turkish-Base", device_map="auto", torch_dtype="auto")
```

### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
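
As a minimal sketch of how these settings map onto `model.generate` (the Turkish prompt below is an illustrative example, not from the model card):

```python
# Hypothetical example: sample a completion with the suggested parameters.
inputs = tokenizer("İstanbul, Türkiye'nin", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.8,         # suggested temperature
    repetition_penalty=1.0,  # suggested repetition penalty
    top_p=0.9,               # suggested top-p
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```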

### Suggested Prompting
This model is a pretrained checkpoint, so to use it effectively, please use few-shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model through direct questions or queries, please use the chat version of the model, [sambanovasystems/SambaLingo-Turkish-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Turkish-Chat), which has been aligned with human preferences.
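
For example, a few-shot prompt might look like the sketch below (the exemplars are hypothetical, and the BOS token is added automatically by the tokenizer):

```python
# Hypothetical few-shot prompt: translation exemplars followed by the query
# the model should complete. The Llama tokenizer prepends <s> (BOS) itself.
few_shot_prompt = (
    "English: Good morning.\nTurkish: Günaydın.\n\n"
    "English: Thank you very much.\nTurkish: Çok teşekkür ederim.\n\n"
    "English: Where is the train station?\nTurkish:"
)
inputs = tokenizer(few_shot_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, temperature=0.8, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```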

## Evaluation Results

## Training Details
All pre-training is done on the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset. We mix the data so that 75% comes from the language we are adapting to and 25% is English, as suggested by [Csaki et al.](https://arxiv.org/abs/2311.05741) We pack the data into sequences of length 4096 and ensure that, when learning a token, the model attends only to previous tokens within the same text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01, and a weight decay of 0.1.
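
These hyperparameters correspond roughly to the following configuration (a hypothetical sketch; the model card does not name the training framework):

```python
# Hypothetical summary of the reported pre-training hyperparameters.
pretraining_config = {
    "dataset_mix": {"adapted_language": 0.75, "english": 0.25},  # CulturaX splits
    "sequence_length": 4096,           # packed sequences
    "intra_document_attention": True,  # attend only within each source document
    "global_batch_size": 1024,
    "max_learning_rate": 1e-4,
    "lr_schedule": "cosine_decay",
    "warmup_ratio": 0.01,
    "weight_decay": 0.1,
}
```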

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended for commercial and research use.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:

- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for their amazing work with Aya-101 and for open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo

## Cite SambaLingo
```bibtex
@software{sambalingo,
  title = {{SambaLingo: Language Experts Adapted From Llama}},
  author = {SambaNova Systems},
  url = {https://huggingface.co/sambanovasystems/SambaLingo-Turkish-Base},
  month = {2},
  year = {2024},
  version = {1.0},
}
```