How to get a different response from the model using the same input

#59
by mans-0987 - opened

I am trying to get several versions of output from the system for the same input and I am always getting the same output.

It seems that if I instantiate the model and tokenizer and then run the pipeline three times with the same input, it always generates the same output. How can I change the model seed so it generates a different output for the same input?

Greedy decoding is used by default, which is deterministic; you can change the decoding parameters to get varied outputs.
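To see why greedy decoding never varies while sampling does, here is a minimal pure-Python sketch (a toy next-token distribution, not the transformers API; the vocabulary and probabilities are made up for illustration):

```python
import random

# Toy next-token distribution over a tiny made-up vocabulary.
VOCAB = ["cat", "dog", "bird"]
PROBS = [0.5, 0.3, 0.2]

def greedy_step():
    # Greedy decoding always picks the most probable token,
    # so the output never changes between runs.
    return VOCAB[max(range(len(PROBS)), key=PROBS.__getitem__)]

def sample_step(rng):
    # Sampling draws a token from the distribution, so different
    # seeds can produce different tokens for the same input.
    return rng.choices(VOCAB, weights=PROBS, k=1)[0]

print([greedy_step() for _ in range(3)])                    # identical every run
print([sample_step(random.Random(s)) for s in (0, 1, 2)])   # can differ by seed
```

The same logic applies per generated token in a real model: with greedy decoding the whole sequence is a deterministic function of the input, so only enabling sampling (or changing other decoding parameters) produces variation.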

Can you please elaborate with some sample code?

Change the parameters of model.generate: set do_sample=True to enable sampling (that is what introduces randomness), and optionally set num_beams greater than 1 for beam-search multinomial sampling.
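A minimal sketch of the suggestion above, assuming the standard Hugging Face transformers API; the checkpoint sshleifer/tiny-gpt2 is just a small demo model, substitute your own, and the prompt and sampling values (top_k, temperature) are arbitrary examples:

```python
from transformers import pipeline, set_seed

# Tiny demo checkpoint so the example runs quickly; swap in your model.
generator = pipeline("text-generation", model="sshleifer/tiny-gpt2")

prompt = "The meaning of life is"

# do_sample=True switches from greedy decoding to sampling, so repeated
# calls can differ; set_seed() before each call makes a run reproducible.
for seed in (0, 1, 2):
    set_seed(seed)
    out = generator(prompt, max_new_tokens=20, do_sample=True,
                    top_k=50, temperature=0.9)
    print(f"seed {seed}: {out[0]['generated_text']}")
```

With do_sample=True and no fixed seed, each call can return a different continuation; fixing the seed with set_seed reproduces a particular one.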

Google org

@mans-0987 did this work for you?
