Model Card for erythropygia/llama-2-7b-chat-hf-Turkish

Llama-2-7b-chat-hf fine-tuned on Turkish instruction-response pairs.

Training Data

  • Dataset size: ~75k Turkish instruction-response pairs

Using the model

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "erythropygia/llama-2-7b-chat-hf-Turkish"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 8-bit loading requires the bitsandbytes package and a CUDA GPU.
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             load_in_8bit=True)

sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9)

# The model is already dispatched across devices by device_map="auto"
# above, so the pipeline needs no device argument of its own.
pipe = pipeline("text-generation",
                model=model,
                tokenizer=tokenizer,
                max_new_tokens=1024,
                return_full_text=True,
                repetition_penalty=1.1)

# Turkish system prompt; in English: "You are a helpful assistant and you
# will try to produce the best answer in line with the instructions given
# to you."
DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n"

# Llama-2 chat prompt format with a <<SYS>> system block.
TEMPLATE = (
    "[INST] <<SYS>>{system_prompt}<</SYS>>\n\n"
    "{instruction} [/INST]"
)

def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT):
    return TEMPLATE.format_map({"instruction": instruction, "system_prompt": system_prompt})

def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT):
    prompt = generate_prompt(user_query, sys_prompt)
    outputs = pipe(prompt, **sampling_params)
    # return_full_text=True echoes the prompt, so keep only the text
    # generated after the closing [/INST] tag.
    return outputs[0]["generated_text"].split("[/INST]")[-1]

user_query = "Başarılı olmak için 5 yol:"  # "5 ways to be successful:"
response = generate_output(user_query)
print(response)
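
Note that newer transformers releases deprecate passing load_in_8bit directly to from_pretrained in favor of an explicit quantization config. A minimal equivalent sketch, assuming bitsandbytes is installed:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Equivalent 8-bit loading via an explicit BitsAndBytesConfig.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             quantization_config=quant_config)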

Training Hyperparameters

  • Epochs: 1
  • Max steps: 100
  • Context length: 1024
  • LoRA Rank: 16
  • LoRA Alpha: 32
  • LoRA Dropout: 0.05
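
A PEFT configuration matching the values above might look like the following sketch. The target_modules choice is an assumption (common Llama-2 attention projections) and is not stated on this card.

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,               # LoRA rank, from the card
    lora_alpha=32,      # LoRA alpha, from the card
    lora_dropout=0.05,  # LoRA dropout, from the card
    # Assumption: target modules are not listed on the card; these are
    # common choices for Llama-2 attention layers.
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)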

Training Results

  • Training loss: 0.96675440790132

Framework versions

  • PEFT 0.8.2
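
Since this repository is published as a PEFT adapter, the weights can also be attached to the base checkpoint explicitly. A minimal sketch, assuming the base model is meta-llama/Llama-2-7b-chat-hf as the description above suggests:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base checkpoint
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the Turkish LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, "erythropygia/llama-2-7b-chat-hf-Turkish")
tokenizer = AutoTokenizer.from_pretrained(base_id)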