Polka-1.1B-Chat

eryk-mazus/polka-1.1b-chat is the first Polish model trained to act as a helpful, conversational assistant that can be run locally.

The model is based on TinyLlama-1.1B with a custom, extended tokenizer for more efficient Polish text generation, and was additionally pretrained on 5.7 billion tokens. It was then fine-tuned on around 60k synthetically generated and machine-translated multi-turn conversations, with Direct Preference Optimization (DPO) performed on top.
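
Because the tokenizer was extended for Polish, the same text generally tokenizes into fewer tokens than with the base Llama vocabulary. A minimal sketch to compare token counts (the TinyLlama checkpoint name below is an assumption; any Llama-family tokenizer works as a baseline):

from transformers import AutoTokenizer

# Baseline Llama-family tokenizer (assumed checkpoint) vs. the extended Polka tokenizer
base_tok = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
polka_tok = AutoTokenizer.from_pretrained("eryk-mazus/polka-1.1b-chat")

# "What is the daily caloric requirement of an adult?"
text = "Jakie jest dzienne zapotrzebowanie kaloryczne dorosłej osoby?"
print("base tokens: ", len(base_tok(text)["input_ids"]))
print("polka tokens:", len(polka_tok(text)["input_ids"]))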

Context size: 4,096 tokens

In addition, we're releasing the continually pretrained base model and the preference dataset used for DPO.

Usage

Sample code:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

model_name = "eryk-mazus/polka-1.1b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_name, 
    torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
    device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True)

# You are a helpful assistant.
system_prompt = "Jesteś pomocnym asystentem."
chat = [{"role": "system", "content": system_prompt}]

# Compose a short song on programming.
user_input = "Napisz krótką piosenkę o programowaniu."
chat.append({"role": "user", "content": user_input})

# Build the prompt; add_generation_prompt ensures the template ends with the assistant header
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
# For multi-GPU, find the device of the first parameter of the model
first_param_device = next(model.parameters()).device
inputs = inputs.to(first_param_device)

with torch.no_grad():
    outputs = model.generate(
        inputs,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=512,
        temperature=0.2,
        repetition_penalty=1.15,
        top_p=0.95,
        do_sample=True,
        streamer=streamer,
    )

# Add just the new tokens to our chat
new_tokens = outputs[0, inputs.size(1):]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
chat.append({"role": "assistant", "content": response})
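
To continue the conversation, append the next user message and repeat the generation step. An illustrative follow-up turn:

# "Add one more verse."
chat.append({"role": "user", "content": "Dodaj jeszcze jedną zwrotkę."})
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to(first_param_device)
# ...then call model.generate(...) exactly as above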

The model works seamlessly with vLLM as well.
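
A minimal vLLM sketch (vLLM must be installed separately; the sampling parameters mirror the illustrative values above):

from vllm import LLM, SamplingParams

llm = LLM(model="eryk-mazus/polka-1.1b-chat")
params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=512)

# Prompt rendered in the model's ChatML format (see "Prompt format" below)
prompt = (
    "<|im_start|>system\nJesteś pomocnym asystentem.<|im_end|>\n"
    "<|im_start|>user\nNapisz krótką piosenkę o programowaniu.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)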

Prompt format

This model uses ChatML as the prompt format:

<|im_start|>system
Jesteś pomocnym asystentem.<|im_end|>
<|im_start|>user
Jakie jest dzienne zapotrzebowanie kaloryczne dorosłej osoby?<|im_end|>
<|im_start|>assistant
Dla dorosłych osób zaleca się spożywanie około 2000-3000 kcal dziennie, aby utrzymać optymalne zdrowie i dobre samopoczucie.<|im_end|>

This format is available as a chat template, which means you can format messages using the tokenizer.apply_chat_template() method, as demonstrated in the Usage example above and in the sketch below.
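
To inspect the rendered prompt string without tokenizing, you can call the template directly (self-contained sketch):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("eryk-mazus/polka-1.1b-chat")
chat = [
    {"role": "system", "content": "Jesteś pomocnym asystentem."},
    {"role": "user", "content": "Jakie jest dzienne zapotrzebowanie kaloryczne dorosłej osoby?"},
]

# Returns the ChatML-formatted string, ending with the assistant header
print(tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True))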
