kakao brain์—์„œ ๊ณต๊ฐœํ•œ kogpt 6b model('kakaobrain/kogpt')์„ fp16์œผ๋กœ ์ €์žฅํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

### ์นด์นด์˜ค๋ธŒ๋ ˆ์ธ ๋ชจ๋ธ์„ fp16์œผ๋กœ ๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•

```python
import torch
from transformers import GPTJForCausalLM

# Load the original Kakao Brain checkpoint in half precision.
# 'KoGPT6B-ryan1.5b' is the revision tag of the released 6B model.
model = GPTJForCausalLM.from_pretrained(
    'kakaobrain/kogpt',
    cache_dir='./my_dir',
    revision='KoGPT6B-ryan1.5b',
    torch_dtype=torch.float16,
)
```

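For reference, an fp16 checkpoint like the one in this repo was presumably produced by saving the model loaded as above back to disk. A minimal sketch, assuming the `model` object from the previous snippet (the output directory name is illustrative):

```python
from transformers import AutoTokenizer

# Write the half-precision weights and config to a local directory;
# the result can then be uploaded to the Hub as its own model repo.
model.save_pretrained('./kogpt_6b_fp16')  # illustrative output path

# Saving the matching tokenizer keeps the directory self-contained.
tokenizer = AutoTokenizer.from_pretrained('kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b')
tokenizer.save_pretrained('./kogpt_6b_fp16')
```
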
### fp16 ๋ชจ๋ธ ๋กœ๋“œ ํ›„ ๋ฌธ์žฅ ์ƒ์„ฑ
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_rLDzhGohJPbOD5I_eTIOdx4aOTp43uK?usp=sharing)

```python
import torch
from transformers import GPTJForCausalLM, AutoTokenizer

# Load the fp16 checkpoint; low_cpu_mem_usage avoids materializing
# a second full copy of the weights during loading.
model = GPTJForCausalLM.from_pretrained('MrBananaHuman/kogpt_6b_fp16', low_cpu_mem_usage=True)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained('MrBananaHuman/kogpt_6b_fp16')

input_text = '์ด์ˆœ์‹ ์€'
input_ids = tokenizer(input_text, return_tensors='pt').input_ids.to('cuda')

output = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output[0]))

>>> ์ด์ˆœ์‹ ์€ ์šฐ๋ฆฌ์—๊ฒŒ ๋ฌด์—‡์ธ๊ฐ€? 1. ๋จธ๋ฆฌ๋ง ์ด๊ธ€์€ ์ž„์ง„์™œ๋ž€ ๋‹น์‹œ ์ด์ˆœ์ธ์ด ๋ณด์—ฌ์ค€

```

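The snippet above uses greedy decoding, so the completion is deterministic. `generate` also supports sampling for more varied output; a minimal sketch reusing `model`, `input_ids`, and `tokenizer` from above (the parameter values are illustrative, not tuned):

```python
# Sampling-based generation: nucleus (top-p) sampling with a mild temperature.
# torch.no_grad() skips gradient tracking, saving memory at inference time.
with torch.no_grad():
    sampled = model.generate(
        input_ids,
        max_length=64,
        do_sample=True,   # sample instead of greedy argmax
        top_p=0.9,        # keep the smallest set covering 90% probability mass
        temperature=0.8,  # <1.0 sharpens the distribution slightly
    )
print(tokenizer.decode(sampled[0]))
```
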
### Reference link
https://github.com/kakaobrain/kogpt/issues/6