---
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
language:
- ar
- en
---
# SambaLingo-Arabic-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left: auto; margin-right: auto; display: block;"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Arabic-Chat-70B is a human-aligned chat model trained in Arabic and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Arabic-Base-70B](https://huggingface.co/sambanovasystems/SambaLingo-Arabic-Base-70B), which adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Arabic by training on 28 billion tokens from the Arabic split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Arabic, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# use_fast=False loads the slow (SentencePiece) tokenizer, as required above.
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Arabic-Chat-70B", use_fast=False)
# device_map="auto" spreads the 70B weights across available GPUs; torch_dtype="auto" keeps the checkpoint's precision.
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Arabic-Chat-70B", device_map="auto", torch_dtype="auto")
```
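Continuing from the snippet above, you can then query the model with the chat template described under Prompting Guidelines below. This is a minimal sketch, not part of the official card: `"YOUR_QUESTION"` is a placeholder and `max_new_tokens=256` is an illustrative choice; the sampling values match the suggested inference parameters.
```python
# Format a single-turn conversation with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "YOUR_QUESTION"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.9)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```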
### Interacting With Model Pipeline
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import pipeline

# use_fast=False is forwarded to the tokenizer, as required above.
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Arabic-Chat-70B", device_map="auto", use_fast=False)
messages = [
    {"role": "user", "content": "YOUR_QUESTION"},  # replace with your question
]
# Render the conversation with the model's chat template before generating.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
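These values can be passed straight through the pipeline call from the section above. A minimal sketch; `max_new_tokens=512` is an illustrative addition, not from this card:
```python
outputs = pipe(
    prompt,
    do_sample=True,           # enable sampling so temperature and top-p take effect
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=512,       # illustrative cap on response length
)[0]["generated_text"]
```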
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
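This is the format that `apply_chat_template` produces in the snippets above; if you build prompts by hand, the equivalent is a simple f-string (`"YOUR_QUESTION"` is a placeholder):
```python
question = "YOUR_QUESTION"
prompt = f"<|user|>\n{question}</s>\n<|assistant|>\n"
```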
### Example Prompts and Generations
```
<|user|>
ูƒู…ู„ูƒุŒ ู‡ู„ ุงู„ุงุญุณู† ุฃู† ุชูƒูˆู† ู…ุญุจูˆุจ ุฃูˆ ู…ุฎูŠู</s>
<|assistant|>
ู„ุง ุฃุณุชุทูŠุน ุฅุจุฏุงุก ุงู„ุฑุฃูŠ ุฃูˆ ุงู…ุชู„ุงูƒ ุงู„ู…ุดุงุนุฑุŒ ู„ูƒู† ูŠู…ูƒู†ู†ูŠ ุชู‚ุฏูŠู… ุฅุฌุงุจุฉ ู…ูˆุถูˆุนูŠุฉ ุนู„ู‰ ู‡ุฐุง ุงู„ุณุคุงู„. ููŠ ุญูŠู† ุฃู† ูƒูˆู†ูƒ ู…ุญุจูˆุจู‹ุง ูŠู…ูƒู† ุฃู† ูŠุฌู„ุจ ุงู„ุณุนุงุฏุฉ ูˆุงู„ุฑุถุงุŒ ูุฅู† ูƒูˆู†ูƒ ู…ุฎูŠูู‹ุง ูŠู…ูƒู† ุฃู† ูŠูƒูˆู† ู„ู‡ ุฃูŠุถู‹ุง ููˆุงุฆุฏู‡. ุนู„ู‰ ุณุจูŠู„ ุงู„ู…ุซุงู„ุŒ ูŠู…ูƒู† ุฃู† ูŠูƒูˆู† ุงู„ุดุฎุต ุงู„ู…ุฎูŠู ูุนุงู„ุงู‹ ููŠ ุฑุฏุน ุงู„ุณู„ูˆูƒูŠุงุช ุฃูˆ ุงู„ู…ูˆุงู‚ู ุบูŠุฑ ุงู„ู…ุฑุบูˆุจ ููŠู‡ุงุŒ ู…ุซู„ ุงู„ุชุฎุฑูŠุจ ุฃูˆ ุงู„ุชู†ู…ุฑ. ูˆู…ุน ุฐู„ูƒุŒ ู…ู† ุงู„ู…ู‡ู… ุฃู† ู†ุชุฐูƒุฑ ุฃู† ุงู„ุญุจ ูˆุงู„ู‚ุจูˆู„ ู‡ู…ุง ู‚ูŠู…ุชุงู† ู…ู‡ู…ุชุงู† ูŠุฌุจ ุงู„ุณุนูŠ ู„ุชุญู‚ูŠู‚ู‡ู…ุงุŒ ูˆุฃู† ูƒูˆู†ูƒ ู…ุญุจูˆุจู‹ุง ู„ุง ูŠู†ุจุบูŠ ุฃู† ูŠูƒูˆู† ุงู„ู‡ุฏู ุงู„ู†ู‡ุงุฆูŠ. ูˆุจุฏู„ุงู‹ ู…ู† ุฐู„ูƒุŒ ูŠุฌุจ ุฃู† ู†ุณุนู‰ ุฌุงู‡ุฏูŠู† ู„ู†ูƒูˆู† ุทูŠุจูŠู† ูˆุฑุญูŠู…ูŠู† ู…ุน ุงู„ุขุฎุฑูŠู†ุŒ ู…ุน ุงู„ุงุนุชุฑุงู ุฃูŠุถู‹ุง ุจุฃู† ูƒู„ ุดุฎุต ู„ุฏูŠู‡ ู†ู‚ุงุท ุงู„ู‚ูˆุฉ ูˆุงู„ุถุนู ุงู„ุฎุงุตุฉ ุจู‡.
```
For readers who do not read Arabic, the user asks: "As a king, is it better to be loved or feared?" The assistant replies: "I cannot offer opinions or possess feelings, but I can provide an objective answer to this question. While being loved can bring happiness and satisfaction, being feared can also have its benefits. For example, a feared person can be effective at deterring undesirable behaviors or situations, such as vandalism or bullying. However, it is important to remember that love and acceptance are important values worth striving for, and that being loved should not be the ultimate goal. Instead, we should strive to be kind and compassionate toward others, while also recognizing that each person has their own strengths and weaknesses."
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and comprises two stages: supervised fine-tuning (SFT) and direct preference optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with a Google-translated version of the same dataset. It was trained for one epoch with a global batch size of 512 and a max sequence length of 2048 tokens. We used a linear decay learning rate of 2e-5 with 10% warmup.
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and the [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google-translated. It was trained for three epochs with a global batch size of 32. We used a linear decay learning rate of 5e-7, 10% warmup, and β=0.1 as the regularization factor for DPO.
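For readers who want to reproduce a similar alignment run, the DPO hyperparameters above map roughly onto a trainer configuration as follows. This is a hedged sketch using TRL's `DPOConfig`/`DPOTrainer`, not the training code actually used for this model: the checkpoint path and output directory are placeholders, the batch sizing is one illustrative way to reach a global batch of 32, and the exact API may differ across TRL versions.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the SFT checkpoint produced in the first stage (path is hypothetical).
model = AutoModelForCausalLM.from_pretrained("path/to/sft-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("path/to/sft-checkpoint", use_fast=False)

# One of the two preference datasets named above; the harmless CAI set would be mixed in similarly.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

# Hyperparameters taken from the description above; everything else is illustrative.
config = DPOConfig(
    output_dir="sambalingo-arabic-dpo",
    beta=0.1,                         # DPO regularization factor
    learning_rate=5e-7,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,   # reaches the global batch size of 32 on a single device
)

trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```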
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
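As a rough illustration of the mechanics only (not the procedure used here: the token list below is a placeholder, whereas SambaLingo's new tokens were derived from Arabic data), vocabulary extension in `transformers` looks like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")

# Placeholder tokens: in practice the ~25,000 additions come from tokenizing Arabic text
# and keeping pieces that do not already exist in the Llama vocabulary.
new_tokens = ["مرحبا", "كتاب", "مدينة"]
num_added = tokenizer.add_tokens([t for t in new_tokens if t not in tokenizer.get_vocab()])

# Grow the embedding (and output) matrices so the new token ids have rows to train.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; new vocab size {len(tokenizer)}")
```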
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```