---
license: apache-2.0
datasets:
  - Dongwookss/q_a_korean_futsal
language:
  - ko
tags:
  - unsloth
  - trl
  - transformer
---

# Model Name: ํ’‹ํ’‹์ด (futfut)

## Model Concept

- Built a friendly helper chatbot for the futsal domain by combining LLM fine-tuning with RAG (see the retrieval sketch below).
- Base Model: zephyr-7b-beta
- ํ’‹ํ’‹์ด speaks in the polite Korean 'ํ•ด์š”' style and ends every reply with '์–ผ๋งˆ๋“ ์ง€ ๋ฌผ์–ด๋ณด์„ธ์š”! ํ’‹ํ’‹!' ("Ask me anything! Fut-fut!").

## Summary

- Applied LoRA fine-tuning with the Unsloth package.
- Trained with TRL's SFTTrainer.
- Training data:
  - q_a_korean_futsal
    - Answers were converted to the 'ํ•ด์š”' style and greetings were added so the model keeps its persona (see the formatting sketch below).
- Environment: Google Colab with an L4 GPU.
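
A minimal sketch of that preprocessing, assuming the raw dataset has `question`/`answer` columns and that the training text follows zephyr-7b-beta's chat format; both are assumptions, not confirmed by the card.

```python
from datasets import load_dataset

dataset = load_dataset("Dongwookss/q_a_korean_futsal", split="train")

def to_text(example):
    # Append the persona catchphrase and wrap in Zephyr-style chat markers.
    answer = example["answer"].rstrip() + " ์–ผ๋งˆ๋“ ์ง€ ๋ฌผ์–ด๋ณด์„ธ์š”! ํ’‹ํ’‹!"
    example["text"] = (
        f"<|user|>\n{example['question']}</s>\n"
        f"<|assistant|>\n{answer}</s>\n"
    )
    return example

dataset = dataset.map(to_text)  # yields the `text` field used by SFTTrainer below
```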

## How to use

### Model Load


```python
# !pip install transformers==4.40.0 accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'Dongwookss/small_fut_final'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bf16 halves memory vs fp32
    device_map="auto",            # place layers on available devices
)
model.eval()
```
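
If the model does not fit in GPU memory, a 4-bit load is one option; this variant is a sketch assuming `bitsandbytes` is installed, not something the original card prescribes.

```python
from transformers import BitsAndBytesConfig

# Quantize weights to 4-bit at load time; compute still runs in bf16.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```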

### Query

```python
from transformers import TextStreamer

PROMPT = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.'''
instruction = "ํ’‹์‚ด ๊ฒฝ๊ธฐ๋Š” ํ•œ ํŒ€์— ๋ช‡ ๋ช…์ด ๋›ฐ๋‚˜์š”?"  # example user question

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),  # drop if the tokenizer has no <|eot_id|> token
]

text_streamer = TextStreamer(tokenizer)
_ = model.generate(
    input_ids,
    max_new_tokens=4096,
    eos_token_id=terminators,
    do_sample=True,
    streamer=text_streamer,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)
```
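
If you want the reply as a plain string instead of streamed output, the same call can be decoded afterwards; this is a usage sketch, not from the original card.

```python
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```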

## Fine-Tuning with Unsloth (SFTTrainer)

```python
from unsloth import FastLanguageModel
import torch
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 256
dtype = None          # auto-detect: bf16 on Ampere+, fp16 otherwise
load_in_4bit = False
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    # token = ,  # HF access token, if required
)

model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
        "gate_proj",
        "up_proj",
        "down_proj",
    ],  # target modules
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=123,
    use_rslora=False,
    loftq_config=None,
)

tokenizer.padding_side = "right"

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # dataset with a `text` field, as in the formatting sketch above
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    dataset_num_proc=2,
    packing=False,
    args=TrainingArguments(
        per_device_train_batch_size=20,
        gradient_accumulation_steps=2,
        warmup_steps=5,
        num_train_epochs=3,
        max_steps=1761,     # max_steps takes precedence over num_train_epochs
        logging_steps=10,
        learning_rate=2e-5,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="cosine",
        seed=123,
        output_dir="outputs",
    ),
)

trainer.train()
```
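
After training you will likely want to persist the result; a minimal sketch with placeholder output paths, not steps from the original card.

```python
# Save the LoRA adapters and tokenizer (paths are placeholders).
model.save_pretrained("futfut-lora")
tokenizer.save_pretrained("futfut-lora")
# Unsloth can also merge the adapters into standalone 16-bit weights:
# model.save_pretrained_merged("futfut-merged", tokenizer, save_method="merged_16bit")
```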