How to use the trained model for inference?

#10 opened by LycheeX

As said in the title.


My sample code:

from transformers import AutoTokenizer, T5ForConditionalGeneration
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Replace "t5-small" with the path to your own fine-tuned checkpoint if you have one.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").to(device)
model.eval()

for prompt in ["Hello, How are you?", "My name is Arnaud"]:
    print("Input:", prompt)
    # T5 expects a task prefix; here we use the English-to-French translation task.
    inputs = tokenizer(f"translate English to French: {prompt}", return_tensors="pt").to(device)
    with torch.no_grad():  # no gradients are needed at inference time
        outputs = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=50)
    print("Output:", tokenizer.decode(outputs[0], skip_special_tokens=True))
