Can you provide me with the interface code?

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("ar08/Mistral-bengali-100K")
model = AutoModelForCausalLM.from_pretrained("ar08/Mistral-bengali-100K")

# Input text (Bengali: "In which country is Portuguese spoken?")
input_text = "পর্তুগিজ কোন দেশে কথা বলা হয়?"

# Tokenize input text
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# Generate text
output = model.generate(input_ids, max_length=500, do_sample=True, top_k=50, temperature=0.1)

# Decode the generated output
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

print("Generated Text:", output_text)

I have tried this, but it's not effective. I trained the model on the Alpaca dataset.
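Since the model was fine-tuned on the Alpaca dataset, a bare question may underperform if the model expects the Alpaca instruction template it saw during training. Below is a minimal sketch of wrapping the input in that template before generation; the exact template text (and whether it was translated into Bengali for training), along with the sampling settings, are assumptions to adjust to the actual training setup.

# Hypothetical sketch: wrap the question in the standard Alpaca instruction
# template before tokenizing. Assumes the model was fine-tuned on
# Alpaca-style prompts; change the template if training used another format.
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = alpaca_prompt.format(instruction=input_text)
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling settings here are illustrative, not the ones used in training
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, top_k=50, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt
output_text = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print("Generated Text:", output_text)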
