---
language:
  - th
  - en
license: llama2
datasets:
  - HuggingFaceH4/ultrachat_200k
  - HuggingFaceH4/ultrafeedback_binarized
  - HuggingFaceH4/cai-conversation-harmless
model-index:
  - name: SambaLingo-Thai-Chat
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 52.73
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 78.42
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 43.95
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 40.84
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 72.22
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 8.57
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
          name: Open LLM Leaderboard
---

SambaLingo-Thai-Chat

SambaLingo-Thai-Chat is a human-aligned chat model trained in Thai and English. It was trained using direct preference optimization on top of the base model SambaLingo-Thai-Base, which adapts Llama-2-7b to Thai by training on 38 billion tokens from the Thai split of the CulturaX dataset. Try this model at SambaLingo-chat-space.

Model Description

  • Developed by: SambaNova Systems
  • Model type: causal language model for chat (text generation)
  • Languages: Thai, English
  • Finetuned from: SambaLingo-Thai-Base, an adaptation of Llama-2-7b
  • License: Llama 2 Community License Agreement

Getting Started

Loading Model With Hugging Face

Please make sure to set use_fast=False when loading the tokenizer.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the slow tokenizer (use_fast=False, as noted above) and the model.
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat", use_fast=False)
# device_map="auto" places weights across available devices; torch_dtype="auto" keeps the checkpoint dtype.
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat", device_map="auto", torch_dtype="auto")
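
Once loaded, a reply can be generated as below. This is a minimal sketch rather than part of the original card: the question string is a hypothetical placeholder, and max_new_tokens is an illustrative cap.

# Minimal generation sketch using the tokenizer/model loaded above.
messages = [{"role": "user", "content": "What is the capital of Thailand?"}]  # hypothetical example
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)  # illustrative cap
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))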

Interacting With Model Pipeline

Please make sure to set use_fast=False when loading the tokenizer.

from transformers import pipeline

# Build a text-generation pipeline; use_fast=False is required for this tokenizer.
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Thai-Chat", device_map="auto", use_fast=False)

# Replace the placeholder with your question.
messages = [
    {"role": "user", "content": "{YOUR_QUESTION}"},
]
# Format the conversation with the model's chat template, then generate.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]["generated_text"]

Suggested Inference Parameters

  • Temperature: 0.8
  • Repetition penalty: 1.0
  • Top-p: 0.9
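
As a sketch, these values can be passed directly to the pipeline from the previous section. Note that do_sample=True and max_new_tokens are assumptions added here, since temperature and top-p only take effect when sampling is enabled:

outputs = pipe(
    prompt,
    do_sample=True,           # assumed: sampling must be on for temperature/top-p to apply
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=256,       # illustrative cap, not from the model card
)
print(outputs[0]["generated_text"])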

Prompting Guidelines

To prompt this model, please use the following chat template:

<|user|>\n{question}</s>\n<|assistant|>\n
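
Equivalently, the prompt string can be built by hand from the template above (a sketch; the question is a hypothetical placeholder):

question = "What is the capital of Thailand?"  # hypothetical placeholder
prompt = f"<|user|>\n{question}</s>\n<|assistant|>\n"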

Example Prompts and Generations

<|user|>
ปรัชญาทางเศรษฐกิจที่พระบาทสมเด็จพระมหาภูมิพลอดุลยเดชมหาราช บรมนาถบพิตรมีพระราชดำรัสแก่ชาวไทย มีชื่อว่าอะไร</s>
<|assistant|>
ปรัชญาทางเศรษฐกิจที่พระบาทสมเด็จพระปรมินทรมหาภูมิพลอดุลยเดชทรงมีพระราชดำรัสแก่ชาวไทยเรียกว่า "ปรัชญาเศรษฐกิจพอเพียง" ปรัชญานี้เน้นความสำคัญของการพัฒนาแบบองค์รวมและความสมดุล โดยคำนึงถึงสภาพแวดล้อมทางสังคม เศรษฐกิจ และสิ่งแวดล้อม ปรัชญาเศรษฐกิจพอเพียงเน้นความสำคัญของการพึ่งพาตนเอง ความมีเหตุผล และการประหยัด ตลอดจนความสำคัญของการพัฒนาจิตใจและจิตวิญญาณ ปรัชญานี้เน้นย้ำถึงความสำคัญของการดำเนินชีวิตอย่างพอประมาณ มีเหตุผล และมีคุณธรรม ตลอดจนความสำคัญของการพัฒนาความรู้และทักษะอย่างต่อเนื่อง ปรัชญาเศรษฐกิจพอเพียงได้รับการยกย่องอย่างกว้างขวางว่าเป็นแนวทางแก้ไขปัญหาการพัฒนาที่ยั่งยืนและเป็นแนวทางในการดำเนินชีวิตที่นำไปสู่ความสุขและความเป็นอยู่ที่ดี

(English translation: the user asks for the name of the economic philosophy that King Bhumibol Adulyadej the Great set out in royal addresses to the Thai people. The model answers that it is called the "Sufficiency Economy Philosophy" and summarizes its emphasis on holistic, balanced development across social, economic, and environmental conditions; on self-reliance, reasonableness, and frugality; on moderate, principled living; and on continuous development of knowledge and skills, noting that it is widely regarded as an approach to sustainable development and to a life of happiness and well-being.)

Training Details

The alignment phase follows the recipe for Zephyr-7B and comprises two stages: supervised fine-tuning (SFT) and direct preference optimization (DPO).

The SFT phase used the ultrachat_200k dataset mixed with a Google-translated version of the same dataset. The model was trained for one epoch with a global batch size of 512 and a maximum sequence length of 2048 tokens, using a linearly decaying learning rate of 2e-5 with 10% warmup.

The DPO phase used the ultrafeedback_binarized and cai-conversation-harmless datasets, with 10% of the data Google-translated. The model was trained for three epochs with a global batch size of 32, using a linearly decaying learning rate of 5e-7, 10% warmup, and β = 0.1 as the regularization factor for DPO.
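
For reference, the stated hyperparameters collected in one place. This is a plain summary rather than a training script; the dictionary keys are illustrative names, not arguments of any specific trainer.

# SFT stage (values from the paragraph above).
sft_hparams = {
    "epochs": 1,
    "global_batch_size": 512,
    "max_seq_length": 2048,
    "learning_rate": 2e-5,   # linear decay
    "warmup_ratio": 0.10,
}

# DPO stage (values from the paragraph above).
dpo_hparams = {
    "epochs": 3,
    "global_batch_size": 32,
    "learning_rate": 5e-7,   # linear decay
    "warmup_ratio": 0.10,
    "beta": 0.1,             # regularization factor in the DPO loss
}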

Tokenizer Details

We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
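
The extended vocabulary can be checked directly from the released tokenizer (a quick sketch; the exact count comes from the checkpoint itself):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat", use_fast=False)
print(len(tok))  # expected to report 57,000 per the description above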

Uses

Direct Use

Use of this model is governed by Meta's Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.

Out-of-Scope Use

SambaLingo should NOT be used for:

  • Mission-critical applications
  • Applications that involve the safety of others
  • Making highly important decisions

Bias, Risks, and Limitations

Like all LLMs, SambaLingo has certain limitations:

  • Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
  • Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
  • Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
  • Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
  • Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

Acknowledgments

We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:

  • Meta for open sourcing Llama 2 and the FLORES-200 dataset
  • Nguyen et al. for open sourcing the CulturaX dataset
  • CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
  • EleutherAI for their open-source evaluation framework
  • The Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo

Cite SambaLingo

@software{sambalingo,
  title = {{SambaLingo: Open Source Language Experts}},
  author = {SambaNova Systems},
  url = {https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat},
  month = {2},
  year = {2024},
  version = {1.0},
}

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat

Metric                              Value
------                              -----
Avg.                                49.45
AI2 Reasoning Challenge (25-Shot)   52.73
HellaSwag (10-Shot)                 78.42
MMLU (5-Shot)                       43.95
TruthfulQA (0-shot)                 40.84
Winogrande (5-shot)                 72.22
GSM8k (5-shot)                       8.57
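
These scores come from the Hugging Face Open LLM Leaderboard, which runs EleutherAI's lm-evaluation-harness (the evaluation framework acknowledged above). A hedged sketch of re-running a single task locally, assuming the harness's simple_evaluate API (lm-eval v0.4+); the leaderboard's exact harness version and settings may differ, so local numbers may not match exactly:

import lm_eval

# Evaluate one leaderboard task (ARC-Challenge, 25-shot) on this checkpoint.
# Assumption: lm-evaluation-harness v0.4+ exposing lm_eval.simple_evaluate.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=sambanovasystems/SambaLingo-Thai-Chat",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])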