Tsunami Model

Tsunami-1.0-7B-Instruct

TSUNAMI: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence.

The full TSUNAMI name was generated by ChatGPT.


Information

Tsunami-1.0-7B-Instruct is a Thai large language model fine-tuned from Qwen2.5-7B on a Thai dataset.


Author


Prompt Template

This model uses the ChatML prompt template:

<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
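
For illustration, the single-turn conversation from the usage example below ("สวัสดีครับ", Thai for "hello") renders to the following prompt. Note that the generation prompt ends with the assistant header, so the model continues from that point:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
สวัสดีครับ<|im_end|>
<|im_start|>assistant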

How to use

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Tsunami-th/Tsunami-1.0-7B-Instruct"

# Load the model and tokenizer; device_map="auto" places weights on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a ChatML conversation and render it with the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ"}  # "Hello" in Thai
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt")
inputs = inputs.to(model.device)

# Generate, then decode only the newly generated tokens.
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(output[0, len(inputs['input_ids'][0]):], skip_special_tokens=True)
print(response)
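
If more varied output is preferred, sampling arguments can be passed to generate(). Continuing from the example above, the values below are illustrative only, not settings recommended by the model authors:

# Optional: sample instead of greedy decoding (illustrative values).
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
        top_p=0.9
    )
response = tokenizer.decode(output[0, len(inputs['input_ids'][0]):], skip_special_tokens=True)
print(response)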
