How to use the model
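The snippets below assume the transformers and torch packages are already installed. If they are not, a minimal setup (this card does not pin specific versions, so adjust as needed) is:

pip install transformers torch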
Import the model and tokenizer from the Transformers library
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("tykea/mBart-large-50-KQA")
model = AutoModelForSeq2SeqLM.from_pretrained("tykea/mBart-large-50-KQA")
Define a function that takes a question and passes it to the model
import torch

# ask function for easier asking
def ask(custom_question):
    # Tokenize the input
    inputs = tokenizer(
        f"question: {custom_question}",
        return_tensors="pt",
        truncation=True,
        max_length=512,
        padding="max_length"
    )

    # Move the model and the inputs to the same device
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    inputs = {key: value.to(device) for key, value in inputs.items()}

    # Generate an answer without tracking gradients
    model.eval()
    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            max_length=50,
            num_beams=4,
            repetition_penalty=2.0,
            early_stopping=True,
            do_sample=True,
            top_k=50,
            top_p=0.95,
            temperature=0.7,
        )

    # Decode the generated tokens back into text
    answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(f"Question: {custom_question}")
    print(f"Answer: {answer}")
Then call the function
question = "ααΎααα’αΌαααΎαααααααααααΆ?"
ask(question)
# Output
Question: ααΎααα’αΌαααΎαααααααααααΆ?
Answer: ααα’αΌαααΎαααααααααα·α
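As an alternative to the manual tokenize/generate steps above, the same checkpoint can also be used through the Transformers pipeline API. This is a minimal sketch; the generation settings shown here are illustrative and not necessarily the ones used above:

from transformers import pipeline

# Load the checkpoint as a text2text-generation pipeline
qa = pipeline("text2text-generation", model="tykea/mBart-large-50-KQA")

# Keep the same "question: " prefix so the input matches the ask() helper above
result = qa("question: <your Khmer question here>", max_length=50, num_beams=4)
print(result[0]["generated_text"])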
Model tree for tykea/mBart-large-50-KQA
Base model: facebook/mbart-large-50