zolicsaki committed on
Commit
f901c06
1 Parent(s): 091d806

Update README.md

Files changed (1)
  1. README.md +9 -17
README.md CHANGED
@@ -28,6 +28,7 @@ SambaLingo-Serbian-Base is a pretrained Bi-lingual Serbian and English model tha
 - **Language(s):** Serbian, English
 - **Finetuned from model:** [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf)
 - **Try the chat version of this model**: [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
+- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 - **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
 
 ## Getting Started
@@ -53,16 +54,7 @@ All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonl
 We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
 
 ## Evaluation
-
-| Benchmark                    | SambaLingo-Serbian-Base | sr-gpt2 | bloom-7b1 | xglm-7.5B | mGPT-13B |
-|------------------------------|-------------------------|---------|-----------|-----------|----------|
-| Perplexity (Lower Is Better) | **1.436**               | -       | 2.140     | 2.404     | 2.429    |
-| FLORES en->sr (8 shot, CHRF) | **0.448**               | 0.002   | 0.171     | 0.090     | 0.024    |
-| FLORES sr->en (8 shot, CHRF) | **0.625**               | 0.071   | 0.206     | 0.257     | 0.026    |
-| FLORES en->sr (8 shot, BLEU) | **0.188**               | 0.000   | 0.003     | 0.001     | 0.000    |
-| FLORES sr->en (8 shot, BLEU) | **0.352**               | 0.000   | 0.019     | 0.040     | 0.000    |
-| Belebele (3 shot)            | **48.33%**              | 23.00%  | 23.89%    | 27.00%    | 25.22%   |
-| SIB-200 (3 shot)             | 55.39%                  | -       | 32.35%    | **61.76%** | 39.22%  |
+For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 
 ## Uses
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
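The hunk above keeps the README's note about growing the Llama 2 vocabulary from 32,000 to 57,000 tokens. For readers of this diff, here is a minimal sketch of what such a vocabulary extension can look like with the `transformers` API; the candidate token list and the overlap filter are illustrative assumptions, not the actual SambaLingo pipeline.

```python
# Illustrative sketch only: extend a base Llama tokenizer with new-language
# tokens and resize the embedding matrix to match. The token list below is
# a placeholder, not SambaLingo's actual ~25,000 mined tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical candidate tokens mined from a Serbian corpus; keep only
# those that do not overlap with the existing 32,000-token vocabulary.
candidates = ["говор", "језик", "причати"]
new_tokens = [t for t in candidates if t not in tokenizer.get_vocab()]
tokenizer.add_tokens(new_tokens)

# Grow the input/output embeddings so the new token ids have rows to train.
model.resize_token_embeddings(len(tokenizer))
print(f"vocabulary size is now {len(tokenizer)}")
```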
@@ -105,12 +97,12 @@ We would like to give a special thanks to the following groups:
 
 ## Cite SambaLingo
 ```
-@software{sambalingo,
-      title = {{SambaLingo: Open Source Language Experts}},
-      author = {SambaNova Systems},
-      url = {https://huggingface.co/sambanovasystems/SambaLingo-Serbian-Base}
-      month = {2},
-      year = {2024},
-      version = {1.0},
+@misc{csaki2024sambalingo,
+      title={SambaLingo: Teaching Large Language Models New Languages},
+      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
+      year={2024},
+      eprint={2404.05829},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
 }
 ```
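The evaluation table removed in this commit reported held-out perplexity alongside FLORES and Belebele scores. For context, a minimal sketch of a token-level perplexity measurement with `transformers` follows; the held-out sentences are placeholders, and this is not the harness behind the removed numbers.

```python
# Minimal sketch of corpus-level perplexity, assuming short held-out texts
# that each fit in one forward pass (not the exact SambaLingo evaluation).
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sambanovasystems/SambaLingo-Serbian-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

texts = ["Ово је пример реченице на српском."]  # placeholder held-out data
total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids
        # With labels equal to the inputs, HF shifts internally and
        # returns the mean negative log-likelihood over predicted tokens.
        out = model(ids, labels=ids)
        n = ids.numel() - 1  # tokens actually predicted after the shift
        total_nll += out.loss.item() * n
        total_tokens += n

print("perplexity:", math.exp(total_nll / total_tokens))
```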
 