# KoGPT2-Transformers
KoGPT2 on Huggingface Transformers
- [SKT-AI μ—μ„œ κ³΅κ°œν•œ KoGPT2 (ver 1.0)](https://github.com/SKT-AI/KoGPT2)λ₯Ό [Transformers](https://github.com/huggingface/transformers)μ—μ„œ μ‚¬μš©ν•˜λ„λ‘ ν•˜μ˜€μŠ΅λ‹ˆλ‹€.
- **SKT-AI μ—μ„œ KoGPT2 2.0을 κ³΅κ°œν•˜μ˜€μŠ΅λ‹ˆλ‹€. https://huggingface.co/skt/kogpt2-base-v2/**
### Demo
- Everyday-conversation chatbot: http://demo.tmkor.com:36200/dialo
- Cosmetics review generation: http://demo.tmkor.com:36200/ctrl
### Example
```python
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

model = GPT2LMHeadModel.from_pretrained("taeminlee/kogpt2")
tokenizer = PreTrainedTokenizerFast.from_pretrained("taeminlee/kogpt2")

# Encode a Korean prompt ("μ•ˆλ…•" = "hello")
input_ids = tokenizer.encode("μ•ˆλ…•", add_special_tokens=False, return_tensors="pt")

# Sample three continuations of up to 100 tokens each
output_sequences = model.generate(
    input_ids=input_ids,
    do_sample=True,
    max_length=100,
    num_return_sequences=3,
)

for generated_sequence in output_sequences:
    generated_sequence = generated_sequence.tolist()
    print("GENERATED SEQUENCE : {0}".format(
        tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)))
```
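With `do_sample=True`, `generate` draws each next token from the model's probability distribution instead of always taking the argmax, which is why the three returned sequences differ. A minimal sketch of one such sampling step (top-k sampling over hypothetical, made-up logits; no model download needed, and the function name `top_k_sample` is our own, not a Transformers API):

```python
import math
import random

def top_k_sample(logits, k, rng):
    """Sample one token id from the k highest-scoring logits.

    Sketch of a single sampling step; `logits` are made-up scores,
    not real model output.
    """
    # Keep the indices of the k largest logits
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the surviving logits (shifted by the max for stability)
    m = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - m) for i in top]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index in proportion to its probability
    return rng.choices(top, weights=probs, k=1)[0]

rng = random.Random(0)
dummy_logits = [0.1, 2.5, -1.0, 3.2, 0.7]  # hypothetical scores for 5 tokens
token_id = top_k_sample(dummy_logits, k=2, rng=rng)  # one of the two best ids (1 or 3)
```

Greedy decoding (the `do_sample=False` default) would always return the single highest-scoring id instead.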