
Model card for Mistral-7B-Instruct-Ukrainian

Mistral-7B-UK is a Large Language Model finetuned for the Ukrainian language.

Mistral-7B-UK was trained using the following recipe:

  1. Initial finetuning of Mistral-7B-v0.2 using structured and unstructured datasets.
  2. SLERP merge of the finetuned model with NeuralTrix-7B, a model that outperforms Mistral-7B-v0.2 on the OpenLLM benchmark (a conceptual sketch of such a merge follows this list).
  3. DPO of the merged model.
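
For illustration only, the snippet below sketches what a SLERP merge of two checkpoints could look like in plain PyTorch. It is a conceptual sketch, not the actual merge configuration used for this model: the checkpoint paths, the interpolation factor t=0.5, and the uniform per-tensor interpolation are assumptions (in practice such merges are usually produced with a dedicated tool such as mergekit).

# Conceptual SLERP merge sketch. Assumptions: placeholder checkpoint paths,
# t=0.5, and the same interpolation factor for every tensor.
import torch
from transformers import AutoModelForCausalLM

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel directions: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    merged = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

# Placeholder paths: the Ukrainian finetune from step 1 and the merge partner from step 2
model_a = AutoModelForCausalLM.from_pretrained("path/to/ukrainian-finetune", torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained("path/to/NeuralTrix-7B", torch_dtype=torch.bfloat16)

state_b = model_b.state_dict()
merged_state = {name: slerp(0.5, p, state_b[name]) for name, p in model_a.state_dict().items()}
model_a.load_state_dict(merged_state)
model_a.save_pretrained("mistral-7b-uk-merged")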

Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by [INST] and [/INST] tokens.

E.g.

text = "[INST]Відповідайте лише буквою правильної відповіді: Елементи експресіонізму наявні у творі: A. «Камінний хрест», B. «Інститутка», C. «Маруся», D. «Людина»[/INST]"

This format is available as a chat template via the apply_chat_template() method:
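
For example, a minimal sketch of reproducing the prompt above from a plain chat message (assuming the tokenizer in this repository ships the Mistral-style chat template):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SherlockAssistant/Mistral-7B-Instruct-Ukrainian")
messages = [{"role": "user", "content": "Відповідайте лише буквою правильної відповіді: Елементи експресіонізму наявні у творі: A. «Камінний хрест», B. «Інститутка», C. «Маруся», D. «Людина»"}]
# The template wraps the user turn in [INST] ... [/INST], matching the format shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)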

Model Architecture

This instruction model is based on Mistral-7B-v0.2, a transformer model with the following architecture choices (a short config-inspection sketch follows the list):

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
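
As a quick sanity check, these choices can be read from the published config and tokenizer. A minimal sketch (the field names are the standard transformers MistralConfig attributes; the printed values are not asserted here):

from transformers import AutoConfig, AutoTokenizer

repo = "SherlockAssistant/Mistral-7B-Instruct-Ukrainian"
config = AutoConfig.from_pretrained(repo)

# Grouped-Query Attention: fewer key/value heads than attention heads
print(config.num_attention_heads, config.num_key_value_heads)
# Sliding-Window Attention window size (None means full attention)
print(config.sliding_window)

# Byte-fallback BPE: rare characters decompose into byte tokens instead of <unk>
tokenizer = AutoTokenizer.from_pretrained(repo)
print(tokenizer.tokenize("🜁"))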

Datasets - Structured

Datasets - Unstructured

  • Ukrainian Wiki

Datasets - DPO

💻 Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "SherlockAssistant/Mistral-7B-Instruct-Ukrainian"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the [INST]-wrapped prompt from the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in bfloat16 and place it automatically across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Sampled generation; adjust temperature, top_k and top_p as needed
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])

Citation

If you use this model in your research, please cite our paper:

BibTeX

@inproceedings{boros-chivereanu-dumitrescu-purcaru-2024-llm-uk,
    title = "Fine-tuning and Retrieval Augmented Generation for Question Answering using affordable Large Language Models",
    author = "Boros, Tiberiu and Chivereanu, Radu and Dumitrescu, Stefan Daniel and Purcaru, Octavian",
    booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop, LREC-COLING",
    month = may,
    year = "2024",
    address = "Torino, Italy",
    publisher = "European Language Resources Association",
}

APA

Boros, T., Chivereanu, R., Dumitrescu, S., & Purcaru, O. (2024). Fine-tuning and Retrieval Augmented Generation for Question Answering using affordable Large Language Models. In Proceedings of the Third Ukrainian Natural Language Processing Workshop, LREC-COLING. European Language Resources Association.

MLA

Boros, Tiberiu, Radu Chivereanu, Stefan Daniel Dumitrescu, and Octavian Purcaru. "Fine-tuning and Retrieval Augmented Generation for Question Answering using affordable Large Language Models." Proceedings of the Third Ukrainian Natural Language Processing Workshop, LREC-COLING, European Language Resources Association, 2024.

Chicago

Boros, Tiberiu, Radu, Chivereanu, Stefan Daniel, Dumitrescu, and Octavian, Purcaru. "Fine-tuning and Retrieval Augmented Generation for Question Answering using affordable Large Language Models." . In Proceedings of the Third Ukrainian Natural Language Processing Workshop, LREC-COLING. European Language Resources Association, 2024.
