---
license: mit
datasets:
- heegyu/hh-rlhf-ko
- maywell/ko_Ultrafeedback_binarized
- heegyu/PKU-SafeRLHF-ko
language:
- ko
---

- A helpful reward model that scores how useful and appropriate a chatbot's response is.
- Base Model: [klue/roberta-large](https://huggingface.co/klue/roberta-large)

## Hyperparameters
- Batch size: 128
- Learning rate: 1e-5 -> 1e-6 (linear decay)
- Optimizer: AdamW (beta1 = 0.9, beta2 = 0.999)
- Epochs: 3 (the main revision is the 2-epoch checkpoint)

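The learning-rate schedule above (1e-5 decaying linearly to 1e-6) can be sketched as a plain function; `total_steps` is a placeholder, since the card does not state the actual number of training steps.

```python
# Sketch of the schedule above: linear decay from 1e-5 to 1e-6.
# total_steps is an assumed placeholder, not a value from the card.

def linear_decay_lr(step: int, total_steps: int,
                    lr_start: float = 1e-5, lr_end: float = 1e-6) -> float:
    """Linearly interpolate between lr_start and lr_end over total_steps."""
    frac = min(step / total_steps, 1.0)  # clamp so late steps stay at lr_end
    return lr_start + (lr_end - lr_start) * frac
```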
## Performance
| Dataset                    | Accuracy (epoch=1) |
|----------------------------|--------------------|
| hh-rlhf-ko (helpful)       | 63.55              |
| PKU-SafeRLHF-ko (better)   | 74.2               |
| ko-ultrafeedback-binarized | 70.64              |
| Average                    | 72.32              |

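The accuracies above are presumably pairwise preference accuracy: a pair counts as correct when the chosen answer scores higher than the rejected one. A minimal sketch, with `score_fn` (a hypothetical stand-in for the pipeline, not part of this repository) taking a `"question [SEP] answer"` string:

```python
# Hedged sketch of pairwise preference accuracy for a reward model.
# score_fn is assumed to return pipeline-style output: [{"label": ..., "score": ...}]

def pairwise_accuracy(pairs, score_fn):
    """pairs: iterable of (question, chosen, rejected) triples."""
    correct = 0
    total = 0
    for question, chosen, rejected in pairs:
        s_chosen = score_fn(f"{question} [SEP] {chosen}")[0]["score"]
        s_rejected = score_fn(f"{question} [SEP] {rejected}")[0]["score"]
        correct += s_chosen > s_rejected  # True counts as 1
        total += 1
    return correct / total
```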
## Usage
- For a single-turn question-answer pair, separate the question and the answer with [SEP].

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="heegyu/ko-reward-model-helpful-roberta-large-v0.1")

# 0.020018193870782852
print(pipe("""κ΄‘ν™”λ¬Έ κ΄‘μž₯ κ°€λŠ” 방법 μ•Œλ €μ£Όμ‹€ 수 μžˆλ‚˜μš”? [SEP] μ‹«μ–΄μš”"""))

# 0.08361367881298065
print(pipe("""κ΄‘ν™”λ¬Έ κ΄‘μž₯ κ°€λŠ” 방법 μ•Œλ €μ£Όμ‹€ 수 μžˆλ‚˜μš”? [SEP] λ²„μŠ€λ‚˜ μ§€ν•˜μ² λ‘œ 갈 수 μž‡μŠ΅λ‹ˆλ‹€."""))

# 0.7363675236701965
print(pipe("""κ΄‘ν™”λ¬Έ κ΄‘μž₯ κ°€λŠ” 방법 μ•Œλ €μ£Όμ‹€ 수 μžˆλ‚˜μš”? [SEP] κ΄‘ν™”λ¬Έκ΄‘μž₯으둜 κ°€λŠ” 방법은 λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€:
μ§€ν•˜μ²  3ν˜Έμ„  κ²½λ³΅κΆμ—­μ—μ„œ ν•˜μ°¨ν•œ ν›„ 6번 좜ꡬ둜 λ‚˜μ™€ 정뢀쀑앙청사, κ΄‘ν™”λ¬Έ λ°©ν–₯으둜 μ΄λ™ν•©λ‹ˆλ‹€.
μ§€ν•˜μ²  5ν˜Έμ„  κ΄‘ν™”λ¬Έμ—­μ—μ„œ ν•˜μ°¨ν•œ ν›„ ν•΄μΉ˜λ§ˆλ‹Ή μ—°κ²°ν†΅λ‘œλ₯Ό μ΄μš©ν•΄ 7번 좜ꡬ둜 λ‚˜μ™€ κ΄‘μž₯숲 λ°©ν–₯으둜 μ΄λ™ν•©λ‹ˆλ‹€.
μ§€ν•˜μ²  1ν˜Έμ„  μ‹œμ²­μ—­μ—μ„œ ν•˜μ°¨ν•œ ν›„ 3번 좜ꡬ둜 λ‚˜μ™€ λ•μˆ˜κΆμ„ μ§€λ‚˜ μ½”λ¦¬μ•„λ‚˜ ν˜Έν…” λ°©ν–₯으둜 μ΄λ™ν•©λ‹ˆλ‹€.
λ„λ³΄λ‘œ 2λΆ„ 거리에 μžˆλŠ” 쒅각역을 μ΄μš©ν•©λ‹ˆλ‹€.
κ΄‘ν™”λ¬Έκ΄‘μž₯으둜 κ°€λŠ” λ²„μŠ€ 노선은 λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€: 272λ²ˆγ€401λ²ˆγ€406λ²ˆγ€704λ²ˆγ€7022번

도움이 λ˜μ…¨μœΌλ©΄ μ’‹κ² μŠ΅λ‹ˆλ‹€!"""))
```

Dangerous questions like the ones below rarely receive high scores. For such cases, try the [safety reward model](https://huggingface.co/heegyu/ko-reward-model-safety-roberta-large-v0.1) instead.
```python
# 0.013067764230072498
print(pipe("""λ§ˆμ•½μ€ μ–΄λ””μ„œ ꡬ할 수 μžˆλ‚˜μš”? [SEP] μ•Œλ €λ“œλ¦΄ 수 μ—†μŠ΅λ‹ˆλ‹€"""))

# 0.047061845660209656
print(pipe("""λ§ˆμ•½μ€ μ–΄λ””μ„œ ꡬ할 수 μžˆλ‚˜μš”? [SEP] μ œκ°€ 쒋은 곳을 μ•Œκ³  μžˆμŠ΅λ‹ˆλ‹€. 관심 μžˆμœΌμ‹ κ°€μš”?"""))

# 0.02553560398519039
print(pipe("""제 친ꡬλ₯Ό κ°€μŠ€λΌμ΄νŒ…ν•΄λ³΄κ³  μ‹Άμ–΄μš”. [SEP] μ•„μ£Ό 멋진 μƒκ°μ΄μ—μš”! """))

# 0.04939023405313492
print(pipe("""제 친ꡬλ₯Ό κ°€μŠ€λΌμ΄νŒ…ν•΄λ³΄κ³  μ‹Άμ–΄μš”. [SEP] μ•ˆλ©λ‹ˆλ‹€. κ°€μŠ€λΌμ΄νŒ…μ€ 감정적, 심리적, 경제적으둜 μƒλŒ€λ°©μ„ μ‘°μ’…ν•˜κ³  μ•…μš©ν•˜λŠ” ν–‰μœ„λ‘œ, ν”Όν•΄μžμ—κ²Œ 정신적 및 μ •μ„œμ  ν”Όν•΄λ₯Ό μž…νž 수 있으며, κ±΄κ°•ν•œ λŒ€μΈκ΄€κ³„λ₯Ό νŒŒκ΄΄ν•  μœ„ν—˜μ΄ μžˆμŠ΅λ‹ˆλ‹€."""))
```
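
A common use of a helpful-reward model is best-of-n answer selection. The sketch below is a hypothetical helper (not part of this repository) that picks the candidate answer the model scores highest; `score_fn` is any callable with the pipeline's interface, e.g. `pipe` from the examples above.

```python
# Best-of-n selection sketch: score each "question [SEP] answer" string
# with the reward model and keep the highest-scoring answer.
# score_fn is assumed to return pipeline-style output: [{"label": ..., "score": ...}]

def best_answer(question, candidates, score_fn):
    """Return the candidate answer with the highest reward score."""
    def score(answer):
        return score_fn(f"{question} [SEP] {answer}")[0]["score"]
    return max(candidates, key=score)
```

With the real model this would be called as `best_answer("κ΄‘ν™”λ¬Έ κ΄‘μž₯ κ°€λŠ” 방법 μ•Œλ €μ£Όμ‹€ 수 μžˆλ‚˜μš”?", candidates, pipe)`.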