zolicsaki committed
Commit 43f05dd
1 Parent(s): 0ad63b5

Update README.md

Files changed (1)
  1. README.md +9 -19
README.md CHANGED
@@ -28,6 +28,7 @@ SambaLingo-Turkish-Base is a pretrained Bi-lingual Turkish and English model tha
 - **Language(s):** Turkish, English
 - **Finetuned from model:** [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf)
 - **Try the chat version of this model**: [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
+- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 - **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
 
 ## Getting Started
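The Getting Started code itself falls outside this hunk's context. As a minimal sketch, assuming the standard Hugging Face transformers auto classes and the `sambanovasystems/SambaLingo-Turkish-Base` checkpoint id (inferred from the chat-space link above), loading the base model might look like this:

```python
# Minimal quick-start sketch (assumed, not taken from this diff): load the
# base model with the standard transformers auto classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "sambanovasystems/SambaLingo-Turkish-Base"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

# This is the base (non-chat) model, so use plain text completion
# rather than a chat template.
inputs = tokenizer("İstanbul, Türkiye'nin en büyük", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```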
@@ -53,18 +54,7 @@ All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonl
 We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
 
 ## Evaluation
-
-| | SambaLingo-Turkish-Base | TURNA | bloom-7b1 | xglm-7.5B | mGPT-13B |
-|-------------------------------|--------|-----------|-----------|----------|--------|
-| Perplexity (Lower Is Better) | **1.589** | 13.435 | 2.804 | 1.799 | 2.386 |
-| FLORES en->tr (8 shot, CHRF) | **0.474** | - | 0.107 | 0.223 | 0.192 |
-| FLORES tr->en (8 shot, CHRF) | **0.566** | - | 0.150 | 0.364 | 0.189 |
-| FLORES en->tr (8 shot, BLEU) | **0.132** | - | 0.001 | 0.013 | 0.005 |
-| FLORES tr->en (8 shot, BLEU) | **0.269** | - | 0.004 | 0.098 | 0.011 |
-| Belebele (3 shot) | **34.89%** | 24.22% | 24.00% | 27.33% | 25.44% |
-| SIB-200 (3 shot) | **63.73%** | 18.14% | 42.16% | **63.73%** | 44.61% |
-| XCOPA (0 shot) | **69.40%** | 55.80% | 51.20% | 58.40% | 56.80% |
-| XNLI (0 shot) | 33.85% | 38.40% | 34.95% | **46.25%** | 38.66% |
+For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 
 ## Uses
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
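The vocabulary-extension line in the context above describes the technique only at a high level. Below is a minimal sketch of how such an extension is commonly wired up with the transformers API; the token list is a placeholder for illustration, not the actual mined Turkish tokens:

```python
# Hypothetical sketch of the vocabulary extension described above: add
# new-language tokens to a Llama tokenizer and resize the embeddings to match.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Placeholder tokens; in practice this would be up to 25,000 tokens mined from
# a Turkish corpus that do not already appear in the base 32,000-token vocab.
candidates = ["merhaba", "teşekkür", "günaydın"]
new_tokens = [t for t in candidates if t not in tokenizer.get_vocab()]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so the new token ids have rows to train.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocab size is now {len(tokenizer)}")
```

Resizing the embeddings is what actually grows the model toward the 57,000-token vocabulary mentioned above; the new rows would then be learned during the continued pre-training.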
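The removed FLORES rows report CHRF and BLEU. As a hedged sketch of how such scores are typically computed (the actual evaluation harness is not shown in this commit; note that sacrebleu reports 0-100 while the table uses a 0-1 scale):

```python
# Hypothetical scoring sketch with sacrebleu (assumed tooling, not the
# harness used for the numbers in the table above).
import sacrebleu

hypotheses = ["Bu bir deneme çevirisidir."]   # model outputs for en->tr
references = [["Bu bir test çevirisidir."]]   # FLORES reference translations

chrf = sacrebleu.corpus_chrf(hypotheses, references)
bleu = sacrebleu.corpus_bleu(hypotheses, references)

# Dividing by 100 matches the 0-1 scale the table appears to use.
print(f"CHRF: {chrf.score / 100:.3f}  BLEU: {bleu.score / 100:.3f}")
```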
@@ -107,12 +97,12 @@ We would like to give a special thanks to the following groups:
 
 ## Cite SambaLingo
 ```
-@software{sambalingo,
-  title = {{SambaLingo: Open Source Language Experts}},
-  author = {SambaNova Systems},
-  url = {https://huggingface.co/sambanovasystems/SambaLingo--Base}
-  month = {2},
-  year = {2024},
-  version = {1.0},
+@misc{csaki2024sambalingo,
+      title={SambaLingo: Teaching Large Language Models New Languages},
+      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
+      year={2024},
+      eprint={2404.05829},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
 }
 ```
 