---
language:
  - en
license: apache-2.0
tags:
  - transformers
  - unsloth
  - llama
  - trl
  - sft
  - peft
base_model: unsloth/llama-3-8b-bnb-4bit
library_name: peft
datasets:
  - myzens/alpaca-turkish-combined
---

# Llama 3-8B Turkish Model

This repository contains an experimental, educational fine-tuned model for the Turkish Llama 3 Project, along with variants that can be used for different purposes.

The trained model itself is a PEFT adapter on top of Unsloth's quantized Llama 3-8B; it has also been converted to .gguf format using llama.cpp and to .bin format for vLLM (a llama.cpp usage sketch is included under Example Usage below).

The model is open to further development; we will continue training it as we obtain high-quality data. We cannot use every Turkish dataset, because some of them are poorly translated from English.

You can access the fine-tuning code here.

Trained on an NVIDIA L4 for 150 steps, which took around 8 minutes.

## Example Usage

You can use the adapter model with PEFT.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base model and attach the fine-tuned adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "myzens/llama3-8b-tr-finetuned")
tokenizer = AutoTokenizer.from_pretrained("myzens/llama3-8b-tr-finetuned")

alpaca_prompt = """
Instruction:
{}

Input:
{}

Response:
{}"""

# Prompt (Turkish): "Name 3 places to visit in Ankara and briefly explain what they are."
inputs = tokenizer([
    alpaca_prompt.format(
        "",
        "Ankara'da gezilebilecek 3 yeri söyle ve ne olduklarını kısaca açıkla.",
        "",
    )
], return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
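
If you need standalone merged weights (for example, before converting to the .bin format for vLLM mentioned above), a minimal sketch using PEFT's `merge_and_unload` could look like the following. This is an untested sketch, not part of the original training pipeline: it assumes the adapter can be applied to the full-precision `meta-llama/Meta-Llama-3-8B` base, and the output directory name is arbitrary.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: merge into the full-precision Llama 3-8B base rather than the 4-bit checkpoint
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "myzens/llama3-8b-tr-finetuned").merge_and_unload()

# Save the merged weights plus tokenizer for downstream conversion (e.g., llama.cpp or vLLM)
merged.save_pretrained("llama3-8b-tr-merged")
AutoTokenizer.from_pretrained("myzens/llama3-8b-tr-finetuned").save_pretrained("llama3-8b-tr-merged")
```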

You can also use it directly with Transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("myzens/llama3-8b-tr-finetuned")
# With peft installed, Transformers loads the base model and applies the adapter automatically
model = AutoModelForCausalLM.from_pretrained(
    "myzens/llama3-8b-tr-finetuned",
    device_map="auto",
)

alpaca_prompt = """
Instruction:
{}

Input:
{}

Response:
{}"""

inputs = tokenizer([
    alpaca_prompt.format(
        "",
        "Ankara'da gezilebilecek 3 yeri söyle ve ne olduklarını kısaca açıkla.",
        "",
    )
], return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=192)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Or with the Transformers pipeline:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("myzens/llama3-8b-tr-finetuned")
model = AutoModelForCausalLM.from_pretrained(
    "myzens/llama3-8b-tr-finetuned",
    device_map="auto",
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

alpaca_prompt = """
Instruction:
{}

Input:
{}

Response:
{}"""

prompt = alpaca_prompt.format(
    "",
    "Ankara'da gezilebilecek 3 yeri söyle ve ne olduklarını kısaca açıkla.",
    "",
)

print(pipe(prompt)[0]["generated_text"])
```

Output:

```
Instruction:


Input:
Ankara'da gezilebilecek 3 yeri söyle ve ne olduklarını kısaca açıkla.

Response:
1. Anıtkabir - Mustafa Kemal Atatürk'ün mezarı
2. Gençlik ve Spor Sarayı - spor etkinliklerinin yapıldığı yer
3. Kızılay Meydanı - Ankara'nın merkezinde bulunan bir meydan
```
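
llama.cpp (GGUF): the card mentions a .gguf conversion above. Below is a minimal sketch with llama-cpp-python, assuming the GGUF file has already been downloaded locally; the file name is a placeholder, so check the repository's file list for the actual name.

```python
from llama_cpp import Llama

# Placeholder path: replace with the actual .gguf file from the repository
llm = Llama(model_path="llama3-8b-tr-finetuned.gguf", n_gpu_layers=-1, n_ctx=4096)

prompt = """
Instruction:


Input:
Ankara'da gezilebilecek 3 yeri söyle ve ne olduklarını kısaca açıkla.

Response:
"""

out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```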

## Important Notes

- We recommend using the Alpaca prompt template shown above (or another template); otherwise you may see meaningless generations or the same sentence repeated constantly.
- Use the model on a CUDA-supported GPU.

Fine-tuned by emre570.