
SambaLingo-Turkish-Chat

SambaLingo-Turkish-Chat is a human-aligned chat model trained in Turkish and English. It is trained using direct preference optimization on top of the base model SambaLingo-Turkish-Base. The base model adapts Llama-2-7b to Turkish by training on 42 billion tokens from the Turkish split of the CulturaX dataset. Try this model at the SambaLingo-chat-space.

Model Description

Getting Started

Loading Model With Hugging Face

Please make sure to set use_fast=False when loading the tokenizer.

from transformers import AutoModelForCausalLM, AutoTokenizer

# use_fast=False loads the slow (SentencePiece-based) tokenizer, as required by this model.
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Turkish-Chat", use_fast=False)
# device_map="auto" places the weights on available devices; torch_dtype="auto" keeps the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Turkish-Chat", device_map="auto", torch_dtype="auto")
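
A minimal generation call with the objects loaded above might look like the following sketch; the example question and the generation settings (which mirror the suggested inference parameters below) are illustrative and not prescribed by the card.

# Format a single-turn conversation with the model's chat template, then generate.
messages = [{"role": "user", "content": "What are the most popular dishes in Turkish cuisine?"}]  # illustrative question
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))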

Interacting With the Model Pipeline

Please make sure to set use_fast=False when loading the tokenizer.

from transformers import pipeline

# use_fast=False loads the slow tokenizer, as required by this model.
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Turkish-Chat", device_map="auto", use_fast=False)
messages = [
    {"role": "user", "content": "YOUR_QUESTION"},  # replace with your question
]
# Render the conversation with the model's chat template before calling the pipeline.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]

Suggested Inference Parameters

  • Temperature: 0.8
  • Repetition penalty: 1.0
  • Top-p: 0.9
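
As a sketch, these suggested values map onto the standard transformers generation arguments and can be passed straight to the pipeline call above; the max_new_tokens cap is an illustrative addition, not part of the card.

outputs = pipe(
    prompt,
    do_sample=True,            # sampling must be enabled for temperature/top-p to take effect
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=256,        # illustrative; the card does not specify a length limit
)[0]["generated_text"]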

Prompting Guidelines

To prompt this model, please use the following chat template:

<|user|>\n{question}</s>\n<|assistant|>\n
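
Filled in with a hypothetical question, a single-turn prompt therefore looks like this (apply_chat_template produces the same string automatically):

<|user|>
What is the capital of Turkey?</s>
<|assistant|>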

Training Details

The alignment phase follows the Zephyr-7B recipe and comprises two stages: supervised fine-tuning (SFT) and direct preference optimization (DPO).

The SFT phase was done on the ultrachat_200k dataset mixed with a Google-translated version of the same dataset. It was trained for one epoch with a global batch size of 512 and a maximum sequence length of 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.

The DPO phase was done on the ultrafeedback dataset and the cai-conversation-harmless dataset, with 10% of the data Google-translated. It was trained for three epochs with a global batch size of 32. We used a linear decay learning rate of 5e-7, 10% warmup, and β = 0.1 as the regularization factor for DPO.
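
The card does not include training code, but as a rough sketch the DPO-stage hyperparameters above could be expressed with the Hugging Face TRL library's DPOConfig; the library choice, output path, and the per-device batch split are assumptions, not part of the original card.

from trl import DPOConfig

# Hyperparameters taken from the description above; the per-device split is illustrative.
dpo_config = DPOConfig(
    output_dir="sambalingo-turkish-dpo",  # hypothetical output path
    beta=0.1,                             # DPO regularization factor
    learning_rate=5e-7,
    lr_scheduler_type="linear",           # linear decay
    warmup_ratio=0.1,                     # 10% warmup
    num_train_epochs=3,
    per_device_train_batch_size=4,        # e.g. 4 x 8 devices = global batch size 32 (illustrative)
)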

Tokenizer Details

We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
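
A quick sanity check of the extended vocabulary, assuming the tokenizer loaded in the Getting Started section above:

# The tokenizer length reflects the extended vocabulary (roughly 57,000 entries for this model).
print(len(tokenizer))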

Uses

Direct Use

Use of this model is governed by Meta's Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.

Out-of-Scope Use

SambaLingo should NOT be used for:

  • Mission-critical applications
  • Applications that involve the safety of others
  • Making highly important decisions

Bias, Risks, and Limitations

Like all LLMs, SambaLingo has certain limitations:

  • Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
  • Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
  • Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and less informative responses.
  • Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
  • Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

Acknowledgments

We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:

  • Meta for open-sourcing Llama 2 and the FLORES-200 dataset
  • Nguyen et al. for open-sourcing the CulturaX dataset
  • CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
  • EleutherAI for their open-source evaluation framework
  • The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo

Cite SambaLingo

@software{sambalingo,
  title = {{SambaLingo: Open Source Language Experts}},
  author = {SambaNova Systems},
  url = {https://huggingface.co/sambanovasystems/SambaLingo-Turkish-Chat},
  month = {2},
  year = {2024},
  version = {1.0},
}