Synatra-kiqu-7B

Join our discord

Server Link

License

cc-by-sa-4.0

Model Details

Base Model
maywell/Synatra-7B-v0.3-dpo

Trained On
A100 80GB * 8

This model was trained with GPU resources provided by Sionic AI.

Instruction format

It follows the ChatML format.
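
For reference, a ChatML prompt wraps every turn in <|im_start|> and <|im_end|> tokens and ends by opening the assistant turn. The layout below is only a generic illustration (the exact special tokens and whitespace come from the tokenizer's chat_template, which applies them for you):

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant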

Model Benchmark

TBD

Implementation Code

Since the chat_template already contains the instruction format above, you can use the code below.

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-kiqu-7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-kiqu-7B")

messages = [
    {"role": "user", "content": "μ‚¬νšŒμ  ν•©μ˜λŠ” μ–΄λ–€ λ§₯λ½μ—μ„œ μ‚¬μš©λ˜λŠ” 말이야?"},
]

# add_generation_prompt=True opens the assistant turn so the model answers instead of continuing the user message
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
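
Note that batch_decode returns the prompt and the completion together. If you only want the model's answer, one option (a small sketch, not part of the original card) is to slice off the prompt tokens before decoding:

answer_ids = generated_ids[0][model_inputs.shape[-1]:]  # keep only the newly generated tokens
print(tokenizer.decode(answer_ids, skip_special_tokens=True))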