
# T5-EN-VI-SMALL: Pretraining Text-To-Text Transfer Transformer for English-Vietnamese Translation

## Results

BLEU scores on the test-2013-en-vi test set:

| Model | BLEU (Beam Search) |
|-------|--------------------|
| Luong & Manning (2015) | 23.30 |
| Sequence-to-sequence model with attention | 26.10 |
| Neural Phrase-based Machine Translation (Huang et al., 2017) | 27.69 |
| Neural Phrase-based Machine Translation + LM (Huang et al., 2017) | 28.07 |
| t5-en-vi-small (pretraining, cased) | 28.46 |
| t5-en-vi-small (pretraining, uncased) | 29.23 |
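The cased and uncased rows differ only in whether the hypotheses and references are lowercased before scoring. The card does not specify the evaluation script; the sketch below is only an illustration of how both numbers could be computed with `sacrebleu`, with placeholder `hypotheses` and `references` lists.

```python
# Hypothetical BLEU scoring sketch with sacrebleu; the actual evaluation
# setup used for the table above is not documented in this card.
from sacrebleu.metrics import BLEU

hypotheses = ["..."]  # model translations (placeholders)
references = ["..."]  # gold Vietnamese references (placeholders)

cased = BLEU().corpus_score(hypotheses, [references])
uncased = BLEU(lowercase=True).corpus_score(hypotheses, [references])

print(f"cased:   {cased.score:.2f}")
print(f"uncased: {uncased.score:.2f}")
```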

## Example usage

```python
import torch
from transformers import MT5ForConditionalGeneration, T5Tokenizer

# Use a GPU if one is available, otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

# Load the pretrained checkpoint and move it to the chosen device.
model = MT5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-small")
model.to(device)
model.eval()

src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)

# Translate with beam search.
summary_ids = model.generate(
    tokenized_text,
    max_length=128,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True,
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```

### Output

```
trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```
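
To translate several sentences at once, batching through the tokenizer is usually faster than looping over single inputs. A minimal sketch, assuming the same checkpoint and reusing the `model`, `tokenizer`, and `device` objects from the example above (the input sentences are placeholders):

```python
# Batch translation sketch; reuses model, tokenizer, and device from above.
sentences = [
    "How are you today ?",
    "The weather is nice .",
]

# Pad to the longest sentence in the batch so the tensors are rectangular.
batch = tokenizer(sentences, return_tensors="pt", padding=True).to(device)

with torch.no_grad():
    generated = model.generate(
        batch.input_ids,
        attention_mask=batch.attention_mask,
        max_length=128,
        num_beams=5,
        early_stopping=True,
    )

for translation in tokenizer.batch_decode(generated, skip_special_tokens=True):
    print(translation)
```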