Tags: Text Generation · Transformers · Safetensors · Serbian · mistral · mergekit · Merge · text-generation-inference · conversational · Inference Endpoints

Yugo60-GPT

  • Developed by: datatab
  • License: MIT

🏆 Results

Results were obtained with the Serbian LLM evaluation suite released by Aleksa Gordić: serbian-llm-eval.

  • Evaluation was conducted on a 4-bit version of the model (marked with * in the table) due to hardware resource constraints; a sketch of 4-bit loading follows the table.
| Model               | ARC-E | ARC-C | Hellaswag | BoolQ | Winogrande | OpenbookQA | PiQA  |
|---------------------|-------|-------|-----------|-------|------------|------------|-------|
| Yugo55-GPT-v4-4bit* | 51.41 | 36.00 | 57.51     | 80.92 | 65.75      | 34.70      | 70.54 |
| Yugo55A-GPT         | 51.52 | 37.78 | 57.52     | 84.40 | 65.43      | 35.60      | 69.43 |
| Yugo60-GPT          | tbd   | tbd   | tbd       | tbd   | tbd        | tbd        | tbd   |
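
The exact evaluation setup was not published; a minimal sketch of loading the checkpoint in 4-bit with bitsandbytes, assuming a standard BitsAndBytesConfig, could look like this:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed quantization settings; the configuration actually used for the
# evaluation was not published.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_4bit = AutoModelForCausalLM.from_pretrained(
    "datatab/Yugo60-GPT",
    quantization_config=bnb_config,
    device_map="auto",
)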

💻 Usage

# Install dependencies (Transformers from source plus supporting libraries).
!pip -q install git+https://github.com/huggingface/transformers
!pip install -q datasets loralib sentencepiece
!pip -q install bitsandbytes accelerate
from IPython.display import HTML, display

# Wrap long output lines in notebook cells (Colab/Jupyter only; relies on get_ipython()).
def set_css():
    display(HTML('''
    <style>
      pre {
          white-space: pre-wrap;
      }
    </style>
    '''))

get_ipython().events.register('pre_run_cell', set_css)
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the weights in their native dtype (BF16) and place the model on the
# available GPU(s); device_map="auto" requires accelerate.
model = AutoModelForCausalLM.from_pretrained(
    "datatab/Yugo60-GPT", torch_dtype="auto", device_map="auto"
)

# Note: torch_dtype is not a tokenizer argument, so it is dropped here.
tokenizer = AutoTokenizer.from_pretrained("datatab/Yugo60-GPT")

from typing import Optional
from transformers import TextStreamer


# Default system prompt (Serbian). In English: "Below is an instruction that
# describes a task, paired with an input that provides further context. Write
# a response that appropriately completes the request."
DEFAULT_SYSTEM_PROMPT = (
    "Ispod je uputstvo koje opisuje zadatak, upareno sa unosom koji pruža "
    "dodatni kontekst. Napišite odgovor koji na odgovarajući način kompletira zahtev."
)


def generate(user_content: str, system_content: Optional[str] = None) -> str:
    # Fall back to the default system prompt rather than overwriting the argument.
    system_content = system_content or DEFAULT_SYSTEM_PROMPT

    messages = [
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_content},
    ]

    # Build the prompt with the model's chat template and move it to the model's device.
    tokenized_chat = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Stream tokens to stdout as they are generated.
    text_streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    output = model.generate(
        tokenized_chat,
        streamer=text_streamer,
        max_new_tokens=2048,
        temperature=0.1,
        repetition_penalty=1.11,
        top_p=0.92,
        top_k=1000,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
        do_sample=True,
    )

    # The streamer has already printed the text; return the decoded output as well.
    return tokenizer.decode(output[0], skip_special_tokens=True)

# "List all the planets of the solar system and tell me which is the largest planet."
generate("Nabroj mi sve planete suncevog sistema i reci mi koja je najveca planeta")

# "What is the difference between a llama, a vicuña, and an alpaca?"
generate("Koja je razlika između lame, vikune i alpake?")

# "Write a short email to Sam Altman giving reasons to open-source GPT-4."
generate("Napišite kratku e-poruku Semu Altmanu dajući razloge za GPT-4 otvorenog koda")
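
Because generate also returns the decoded text, a response can be captured instead of only streamed:

response = generate("Koja je razlika između lame, vikune i alpake?")
print(response)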
Model size: 7.24B params · Tensor type: BF16 · Format: Safetensors
