
formal_speech_translation

์นด์นด์˜คํ†ก AI ๋งํˆฌ ๋ณ€ํ™˜๊ธฐ์˜ ์—ญ๊ณตํ•™ ์—”์ง€๋‹ˆ์–ด๋ง์„ ํ†ตํ•ด ๊ฐœ๋ฐœํ•œ ๋Œ€ํ™”์ฒด์— ๊ฒฌ๊ณ ํ•œ ์ƒ๋ƒฅ์ฒด ๋ณ€ํ™˜๊ธฐ์ž…๋‹ˆ๋‹ค.

t5๋ชจ๋ธ์„ ํ•œ๊ตญ์–ด ์ „์šฉ ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šตํ•œ pko-t5-base ๋ชจ๋ธ์„ ํ™œ์šฉํ•˜์—ฌ ํ•™์Šต์„ ์ง„ํ–‰ํ–ˆ์Šต๋‹ˆ๋‹ค.

dev_repository

Usage

transformers์˜ pipe, from_pretrained ๋“ฑ์˜ api๋ฅผ ํ™œ์šฉํ•˜์—ฌ ์ ‘๊ทผ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค.

Example

from transformers import T5TokenizerFast, T5ForConditionalGeneration, pipeline
import torch

# Model path and CPU/GPU selection
cache_dir = "./hugging_face"
gentle_model_path = '9unu/formal_speech_translation'
tokenizer = T5TokenizerFast.from_pretrained(gentle_model_path, cache_dir=cache_dir)
gentle_model = T5ForConditionalGeneration.from_pretrained(gentle_model_path, cache_dir=cache_dir)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build the transformers pipeline
gentle_pipeline = pipeline("text2text-generation", model=gentle_model, tokenizer=tokenizer, device=device)

# Convert the speech style of a text
text = "밥 먹는 중이야"
num_return_sequences = 1
max_length = 60
out = gentle_pipeline(text, num_return_sequences=num_return_sequences, max_length=max_length)
print([x['generated_text'] for x in out])
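The last two lines above (calling the pipeline, then unwrapping `generated_text` from each result dict) can be factored into a small helper. This is a minimal sketch; the helper name `to_formal` is ours, not part of the model's API, and `pipe` can be any transformers text2text-generation pipeline:

```python
def to_formal(text, pipe, max_length=60, num_return_sequences=1):
    """Run a text2text-generation pipeline on `text` and return the
    generated strings.

    `pipe` is a transformers text2text-generation pipeline (or any
    compatible callable); this helper only unwraps the list of
    {"generated_text": ...} dicts that such pipelines return.
    """
    outputs = pipe(
        text,
        num_return_sequences=num_return_sequences,
        max_length=max_length,
    )
    return [o["generated_text"] for o in outputs]
```

With the pipeline from the example above, usage would be `to_formal("밥 먹는 중이야", gentle_pipeline)`.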

License

๋ณธ ๋ชจ๋ธ์€ MIT license ํ•˜์— ๊ณต๊ฐœ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค.

Model size: 276M params (Safetensors, F32)
ยท
Inference API
This model can be loaded on Inference API (serverless).