Fine-tuned model uploaded to HF is not able to predict

#2
by AlketaR - opened

Hi, I am trying to run inference with my fine-tuned model, which is uploaded to my repo on the Hugging Face Hub:

from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

config = PeftConfig.from_pretrained(/ )
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(model, / )

I guess the lines above combine the weights of the pretrained model with the weights created from QLoRA.
The resulting model should be the fine-tuned model, and now I want to predict with it, but it seems that the resulting model does not have a predict function.

"predictions = model.predict(test_examples)[0]" results in "'LlamaForCausalLM' object has no attribute 'predict'".

What am I missing? Thanks in advance!

Hi @AlketaR , thanks for raising the issue.

This is right - model.predict() is something we implemented internally as part of the LudwigModel object in Ludwig; transformers models do not expose it.

To run generation with a transformers causal LM, you can follow this guide: https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration

arnavgrg changed discussion status to closed
