gentle_speech_translation
์นด์นด์คํก AI ๋งํฌ ๋ณํ๊ธฐ์ ์ญ๊ณตํ ์์ง๋์ด๋ง์ ํตํด ๊ฐ๋ฐํ ๋ํ์ฒด์ ๊ฒฌ๊ณ ํ ์ ์ค์ฒด ๋ณํ๊ธฐ์ ๋๋ค.
t5๋ชจ๋ธ์ ํ๊ตญ์ด ์ ์ฉ ๋ฐ์ดํฐ๋ก ํ์ตํ pko-t5-base ๋ชจ๋ธ์ ํ์ฉํ์ฌ ํ์ต์ ์งํํ์ต๋๋ค.
Usage
transformers์ pipe, from_pretrained ๋ฑ์ api๋ฅผ ํ์ฉํ์ฌ ์ ๊ทผ ๊ฐ๋ฅํฉ๋๋ค.
Example
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration, pipeline
# Model path and CPU/GPU device selection
cache_dir = "./hugging_face"
gentle_model_path = '9unu/gentle_speech_translation'
tokenizer = T5TokenizerFast.from_pretrained(gentle_model_path, cache_dir=cache_dir)
gentle_model = T5ForConditionalGeneration.from_pretrained(gentle_model_path, cache_dir=cache_dir)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Build the transformers pipeline
gentle_pipeline = pipeline("text2text-generation", model=gentle_model, tokenizer=tokenizer, device=device, max_length=60)
# Convert the speech style of the text
text = "밥 먹는 중이야"
num_return_sequences = 1
max_length = 60
out = gentle_pipeline(text, num_return_sequences=num_return_sequences, max_length=max_length)
print([x['generated_text'] for x in out])
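The pipeline returns a list of dictionaries, each carrying a generated_text field. As a minimal sketch of the post-processing in the last line above (the extract_texts helper name is hypothetical, not part of the model card), assuming the standard text2text-generation output shape:

```python
def extract_texts(outputs):
    """Pull the 'generated_text' string out of each pipeline result dict."""
    return [item["generated_text"] for item in outputs]

# The text2text-generation pipeline returns results in this shape:
sample_out = [{"generated_text": "밥 먹는 중이에요"}]
print(extract_texts(sample_out))
```

With num_return_sequences greater than 1, the same helper collects every candidate conversion at once.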
License
๋ณธ ๋ชจ๋ธ์ MIT license ํ์ ๊ณต๊ฐ๋์ด ์์ต๋๋ค.