changing the number of returned sequences

#1
by stealthwriter - opened

Hi, how can we control the number of returned answers? For example, n = 5 would return 5 paraphrased sentences instead of 1.

The model was trained with a custom stop sequence \n\n### END. You can use a stopping criteria that looks for this sequence across a batch of sequences; it's not straightforward, for sure. In the next training run I'm planning to switch to the default EOS token, and to update the llm-toys package to support multiple passages, hopefully in the next few days.
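In case it helps, here is a minimal sketch of that approach with transformers: a StoppingCriteria that only halts once the \n\n### END sequence appears in every sequence of the batch, combined with num_return_sequences=5 to get five paraphrases from one call. The model id, prompt format, and sampling settings below are assumptions for illustration; swap in the checkpoint and prompt template you are actually using.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

# Assumed checkpoint for illustration; replace with the model you are using.
MODEL_ID = "llm-toys/RedPajama-INCITE-Base-3B-v1-paraphrase-tone"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

STOP_SEQUENCE = "\n\n### END"


class StopOnSequenceAcrossBatch(StoppingCriteria):
    """Stop generation once every sequence in the batch contains the stop string."""

    def __init__(self, tokenizer, stop_string, prompt_length):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_length = prompt_length  # only look at newly generated tokens

    def __call__(self, input_ids, scores, **kwargs):
        texts = self.tokenizer.batch_decode(
            input_ids[:, self.prompt_length:], skip_special_tokens=True
        )
        return all(self.stop_string in t for t in texts)


# Hypothetical prompt format, not necessarily the one the model was trained with.
prompt = "Paraphrase: The weather was terrible, so we stayed indoors.\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
prompt_len = inputs["input_ids"].shape[1]

outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.8,
    num_return_sequences=5,  # n = 5 paraphrases instead of 1
    stopping_criteria=StoppingCriteriaList(
        [StopOnSequenceAcrossBatch(tokenizer, STOP_SEQUENCE, prompt_len)]
    ),
)

# Decode only the generated part and trim everything after the stop sequence.
for text in tokenizer.batch_decode(outputs[:, prompt_len:], skip_special_tokens=True):
    print(text.split(STOP_SEQUENCE)[0].strip())
```

Note that this criterion waits until all five sequences have produced the stop string, so shorter paraphrases carry some extra tokens that get trimmed afterwards; that is the batch-level compromise mentioned above.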
On a side note, the llm-toys/RedPajama-INCITE-Base-3B-v1-paraphrase-tone model does a better job with paraphrasing in my experience. I'm planning to release an eval of the two models within a week as well.

Interesting, I like your work a lot. I tried other paraphrasing models, and this one works the best. I will try llm-toys/RedPajama-INCITE-Base-3B-v1-paraphrase-tone now. What dataset did you use to train it?

Is there a way to contact you? Do you have a Discord channel?

@stealthwriter the data used is present in the llm-toys repo. I haven't uploaded it to Hugging Face yet. You can find a description of the data in the README of the repo.

You can contact me at krum[dot]utsav at gmail. I've also created a Discord channel: https://discord.gg/wzxcfw45
