# kogpt_6b_fp16
This is the KoGPT 6B model released by Kakao Brain (`kakaobrain/kogpt`), saved in fp16.
### How to load the Kakao Brain model in fp16
```python
import torch
from transformers import GPTJForCausalLM

# Load the original Kakao Brain checkpoint directly in fp16
model = GPTJForCausalLM.from_pretrained(
    'kakaobrain/kogpt',
    cache_dir='./my_dir',
    revision='KoGPT6B-ryan1.5b',
    torch_dtype=torch.float16,
)
```
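For reference, a minimal sketch of how a checkpoint loaded this way can be saved back to disk in fp16 (the output directory name below is illustrative, not the exact path used for this repo):

```python
# Persist the fp16 weights locally so they can be reloaded or uploaded directly
# ('./kogpt_6b_fp16' is an illustrative example path)
model.save_pretrained('./kogpt_6b_fp16')
```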
### Generating text after loading the fp16 model
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_rLDzhGohJPbOD5I_eTIOdx4aOTp43uK?usp=sharing)
```python
import torch
from transformers import GPTJForCausalLM, AutoTokenizer

# Load the pre-converted fp16 checkpoint (smaller download, lower peak CPU memory)
model = GPTJForCausalLM.from_pretrained('MrBananaHuman/kogpt_6b_fp16', low_cpu_mem_usage=True)
model.to('cuda')

tokenizer = AutoTokenizer.from_pretrained('MrBananaHuman/kogpt_6b_fp16')

input_text = '이순신은'
input_ids = tokenizer(input_text, return_tensors='pt').input_ids.to('cuda')

output = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output[0]))
# >>> 이순신은 우리에게 무엇인가? 1. 머리말 이글은 임진왜란 당시 이순인이 보여준
```
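The example above uses greedy decoding. `generate` also accepts standard sampling parameters if more varied output is wanted; a minimal sketch (the temperature/top_p values are illustrative, not tuned for this model):

```python
# Sampled generation instead of greedy decoding;
# temperature and top_p values here are illustrative examples
output = model.generate(
    input_ids,
    max_length=64,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(output[0]))
```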
### Reference
https://github.com/kakaobrain/kogpt/issues/6