---
language:
- ko
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- anthracite-org/magnum-v4-12b
- mistralai/Mistral-Nemo-Instruct-2407
- werty1248/Mistral-Nemo-NT-Ko-12B-dpo
---

# spow12/MK_Nemo_12B

### Model Description

This model is a supervised fine-tuned version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) for Korean, trained with DeepSpeed and trl, and then merged with the models listed below.
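
The exact SFT recipe is not published. Purely as a hypothetical sketch, a trl `SFTTrainer` run launched under DeepSpeed might look like the following; the dataset file, hyperparameters, and `ds_config.json` path are placeholders, not the actual setup.

```python
# Hypothetical sketch of the SFT stage, not the actual training recipe.
# The dataset file (assumed to contain a "text" field), all hyperparameters,
# and the DeepSpeed config path are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="korean_sft_data.jsonl", split="train")

args = SFTConfig(
    output_dir="Mistral-Nemo-Instruct-2407_sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
    deepspeed="ds_config.json",  # DeepSpeed ZeRO config (placeholder path)
)

trainer = SFTTrainer(
    model="mistralai/Mistral-Nemo-Instruct-2407",
    args=args,
    train_dataset=dataset,
)
trainer.train()
```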

The released weights are a mergekit `model_stock` merge of the SFT model with the following models:
```yaml
models:
    - model: anthracite-org/magnum-v4-12b
    - model: mistralai/Mistral-Nemo-Instruct-2407
    - model: spow12/Mistral-Nemo-Instruct-2407_sft_ver_4.4(private)
    - model: werty1248/Mistral-Nemo-NT-Ko-12B-dpo
merge_method: model_stock
base_model: spow12/Mistral-Nemo-Instruct-2407_sft_ver_4.4(private)
dtype: bfloat16
```
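
A configuration like this is typically applied with mergekit's command-line entry point, e.g. `mergekit-yaml merge_config.yaml ./MK_Nemo_12B` (the config and output paths here are illustrative). Note that the base model `spow12/Mistral-Nemo-Instruct-2407_sft_ver_4.4` is private, so the merge cannot be reproduced as-is.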

### Training Data

- Trained on a mix of public and private data (about 130K examples).

### Usage
```python
import torch
from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM

model_id = 'spow12/MK_Nemo_12B'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # optional, requires flash-attn
    device_map='auto',
)
model.eval()

# The model is already placed on devices by device_map='auto' above,
# so no device arguments are needed here.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

generation_configs = dict(
    max_new_tokens=2048,
    num_return_sequences=1,
    temperature=0.75,
    # repetition_penalty=1.1,
    do_sample=True,
    top_k=20,
    top_p=0.9,
    min_p=0.1,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    streamer=TextStreamer(tokenizer),  # optional; streaming requires num_beams=1 (the default)
)

# System prompt in Korean. English translation:
# "You are a friendly chatbot and must answer the user's requests as thoroughly
#  and kindly as possible. Carefully analyze the information the user provides,
#  quickly infer the user's intent, and generate your answer accordingly.
#  Always respond in very natural Korean."
sys_message = """당신은 μΉœμ ˆν•œ μ±—λ΄‡μœΌλ‘œμ„œ μƒλŒ€λ°©μ˜ μš”μ²­μ— μ΅œλŒ€ν•œ μžμ„Έν•˜κ³  μΉœμ ˆν•˜κ²Œ λ‹΅ν•΄μ•Όν•©λ‹ˆλ‹€.
μ‚¬μš©μžκ°€ μ œκ³΅ν•˜λŠ” 정보λ₯Ό μ„Έμ‹¬ν•˜κ²Œ λΆ„μ„ν•˜μ—¬ μ‚¬μš©μžμ˜ μ˜λ„λ₯Ό μ‹ μ†ν•˜κ²Œ νŒŒμ•…ν•˜κ³  그에 따라 닡변을 μƒμ„±ν•΄μ•Όν•©λ‹ˆλ‹€.

항상 맀우 μžμ—°μŠ€λŸ¬μš΄ ν•œκ΅­μ–΄λ‘œ μ‘λ‹΅ν•˜μ„Έμš”."""

message = [
    {
        'role': 'system',
        'content': sys_message
    },
    {
        'role': 'user',
        # "What do you think about the current economic situation?"
        'content': "ν˜„μž¬μ˜ κ²½μ œμƒν™©μ— λŒ€ν•΄ μ–΄λ–»κ²Œ 생각해?"
    }
]
conversation = pipe(message, **generation_configs)
print(conversation[-1])

# Example output (Korean):
# ν˜„μž¬μ˜ κ²½μ œμƒν™©μ€ κ°κ΅­λ§ˆλ‹€ λ‹€λ₯΄λ©°, μ „λ°˜μ μœΌλ‘œλŠ” μ½”λ‘œλ‚˜19 팬데믹의 영ν–₯으둜 큰 타격을 받은 μƒνƒœμž…λ‹ˆλ‹€. λ§Žμ€ κ΅­κ°€μ—μ„œ 경제 μ„±μž₯λ₯ μ΄ κ°μ†Œν•˜κ³  μ‹€μ—…λ₯ μ΄ μƒμŠΉν–ˆμŠ΅λ‹ˆλ‹€. κ·ΈλŸ¬λ‚˜ 각ꡭ μ •λΆ€λŠ” μž¬μ •κ³Ό 톡화 정책을 톡해 경제λ₯Ό μ§€μ§€ν•˜κ³  λ³΅κ΅¬ν•˜κΈ° μœ„ν•΄ λ…Έλ ₯ν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€. μ½”λ‘œλ‚˜19 λ°±μ‹ μ˜ 개발과 배포가 경제 νšŒλ³΅μ— 도움이 될 κ²ƒμœΌλ‘œ κΈ°λŒ€λ˜κ³  μžˆμŠ΅λ‹ˆλ‹€. κ·ΈλŸ¬λ‚˜ μ½”λ‘œλ‚˜19 μ΄μ „μ˜ 경제 μ„±μž₯λ₯ μ„ νšŒλ³΅ν•˜κΈ° μœ„ν•΄μ„œλŠ” μ‹œκ°„μ΄ 걸릴 수 μžˆμŠ΅λ‹ˆλ‹€. μž₯κΈ°μ μœΌλ‘œλŠ” μ €μ„±μž₯κ³Ό κ³ μΈν”Œλ ˆμ΄μ…˜μ΄ 계속될 수 μžˆλŠ” μœ„ν—˜λ„ μžˆμŠ΅λ‹ˆλ‹€. λ”°λΌμ„œ 각ꡭ은 μ½”λ‘œλ‚˜19 μ΄ν›„μ˜ μ„Έκ³„μ—μ„œ μƒˆλ‘œμš΄ 경제 λͺ¨λΈμ„ λͺ¨μƒ‰ν•˜κ³ , 디지털화와 녹색 경제 μ „ν™˜μ„ κ°€μ†ν™”ν•˜λŠ” λ“± λ―Έλž˜μ— λŒ€λΉ„ν•˜λŠ” λ…Έλ ₯이 ν•„μš”ν•©λ‹ˆλ‹€.
#
# English translation:
# The current economic situation differs from country to country; overall, it
# has been hit hard by the COVID-19 pandemic. In many countries, economic
# growth has slowed and unemployment has risen. However, governments are
# working to support and restore their economies through fiscal and monetary
# policy. The development and distribution of COVID-19 vaccines is expected to
# aid the recovery, but it may take time to return to pre-COVID-19 growth
# rates. In the long term, there is also a risk that low growth and high
# inflation will persist. Countries therefore need to prepare for the future,
# for example by exploring new economic models for the post-COVID-19 world and
# accelerating digitalization and the green-economy transition.
```
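
If you prefer calling `generate` directly instead of going through the pipeline, the same conversation can be fed through the tokenizer's chat template. A minimal sketch, reusing `model`, `tokenizer`, and `message` from the snippet above, with sampling settings mirroring `generation_configs`:

```python
# Minimal sketch: direct generate() call via the chat template,
# reusing `model`, `tokenizer`, and `message` defined above.
input_ids = tokenizer.apply_chat_template(
    message,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=2048,
        do_sample=True,
        temperature=0.75,
        top_k=20,
        top_p=0.9,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```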