---
license: apache-2.0
datasets:
- Dongwookss/q_a_korean_futsal
language:
- ko
tags:
- unsloth
- trl
- transformer
---

### Model Name : futfut (ํ’‹ํ’‹์ด)

#### Model Concept 

- ํ’‹์‚ด ๋„๋ฉ”์ธ ์นœ์ ˆํ•œ ๋„์šฐ๋ฏธ ์ฑ—๋ด‡์„ ๊ตฌ์ถ•ํ•˜๊ธฐ ์œ„ํ•ด LLM ํŒŒ์ธํŠœ๋‹๊ณผ RAG๋ฅผ ์ด์šฉํ•˜์˜€์Šต๋‹ˆ๋‹ค.
- **Base Model** : [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) 
- ํ’‹ํ’‹์ด์˜ ๋งํˆฌ๋Š” 'ํ•ด์š”'์ฒด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ง๋์— '์–ผ๋งˆ๋“ ์ง€ ๋ฌผ์–ด๋ณด์„ธ์š”~! ํ’‹ํ’‹~!'๋กœ ์ข…๋ฃŒํ•ฉ๋‹ˆ๋‹ค.

<img src="https://cdn-uploads.huggingface.co/production/uploads/66305fd7fdd79b4fe6d6a5e5/7UDKdaPfBJnazuIi1cUVw.png" width="400" height="400">


#### Summary:

- **LoRA** fine-tuning was performed with the **Unsloth** package.
- Training was carried out with the **SFT Trainer** (TRL).
- Training data
  - [q_a_korean_futsal](https://huggingface.co/datasets/Dongwookss/q_a_korean_futsal)
    - To teach the speaking style, the answers were converted to the 'ํ•ด์š”' register and greetings were added so the model keeps its concept (see the data-formatting sketch below).

- **Environment** : Training was run on Colab with an L4 GPU.
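
The original card does not show the data-formatting step, so the following is a minimal sketch of how the dataset could be turned into the single `text` field consumed by the SFT Trainer further down. The column names (`question`, `answer`) and the exact chat layout are assumptions; only the dataset id and the `text` field name come from the card.

```python
from datasets import load_dataset

# Hypothetical sketch: load the Q&A data and build the single "text" field
# expected by SFTTrainer(dataset_text_field="text"). Column names are assumed.
raw = load_dataset("Dongwookss/q_a_korean_futsal", split="train")

def to_text(example):
    # Zephyr-style chat layout (<|system|> / <|user|> / <|assistant|> with </s> separators).
    return {
        "text": (
            "<|system|>\n์นœ์ ˆํ•œ ํ’‹์‚ด ๋„์šฐ๋ฏธ ํ’‹ํ’‹์ด์ž…๋‹ˆ๋‹ค.</s>\n"
            f"<|user|>\n{example['question']}</s>\n"
            f"<|assistant|>\n{example['answer']}</s>"
        )
    }

dataset = raw.map(to_text)
```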

<details>
  <summary>How to use</summary>
  
  **Model Load**
  
  ``` python
  
  #!pip install transformers==4.40.0 accelerate
  import os
  import torch
  from transformers import AutoTokenizer, AutoModelForCausalLM
  
  model_id = 'Dongwookss/small_fut_final'
  
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.bfloat16,
      device_map="auto",
  )
  model.eval()
  ```

  **Query**

  ```python
  from transformers import TextStreamer
  PROMPT = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.'''
  instruction = "ํ’‹์‚ด ๊ฒฝ๊ธฐ ๊ทœ์น™์„ ์•Œ๋ ค์ฃผ์„ธ์š”"  # example user question; replace as needed

  messages = [
      {"role": "system", "content": f"{PROMPT}"},
      {"role": "user", "content": f"{instruction}"}
  ]
  
  input_ids = tokenizer.apply_chat_template(
      messages,
      add_generation_prompt=True,
      return_tensors="pt"
  ).to(model.device)
  
  # zephyr-7b-beta's tokenizer uses </s> (eos) to end a turn;
  # "<|eot_id|>" is a Llama-3 token that is not present in this vocabulary.
  terminators = [
      tokenizer.eos_token_id,
  ]
  
  text_streamer = TextStreamer(tokenizer)
  _ = model.generate(
      input_ids,
      max_new_tokens=4096,
      eos_token_id=terminators,
      do_sample=True,
      streamer=text_streamer,
      temperature=0.6,
      top_p=0.9,
      repetition_penalty=1.1
  )
  
  ```

</details>

<details>
  <summary>Fine-Tuning with Unsloth(SFT Trainer)</summary>

```python

from unsloth import FastLanguageModel
import torch
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 256
dtype = None
load_in_4bit = False
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    #token = ,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    lora_dropout=0.05, 
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
        "gate_proj",
        "up_proj",
        "down_proj",
    ],  # target modules
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=123,
    use_rslora=False,
    loftq_config=None,
)

tokenizer.padding_side = "right"

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # dataset with a 'text' field (see the data-formatting sketch above)
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    dataset_num_proc=2,
    packing=False,
    args=TrainingArguments(
        per_device_train_batch_size=20,
        gradient_accumulation_steps=2,
        warmup_steps=5,
        num_train_epochs=3,
        max_steps=1761,  # when set, max_steps takes precedence over num_train_epochs
        logging_steps=10,
        learning_rate=2e-5,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="cosine",
        seed=123,
        output_dir="outputs",
    ),
)

trainer.train()

```
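
After `trainer.train()` the LoRA weights live on the PEFT-wrapped model. As a hedged follow-up (the path below is a placeholder, not from the original card), the trained adapter and tokenizer can be persisted with the standard `save_pretrained` calls:

```python
# Sketch only: persist the trained LoRA adapter and tokenizer (placeholder path).
model.save_pretrained("outputs/futfut-lora-adapter")
tokenizer.save_pretrained("outputs/futfut-lora-adapter")
```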


</details>