poor text generation diversity

#43
by matteoperiani - opened

I was testing this model on generating responses for a given task.
My task is to generate a sentence that contains a desired grammar topic.
For instance, if I ask the model to generate a sentence containing the present simple, I would expect something like "The cat is on the table."

What I've found is that, even though the model sometimes struggles a little to understand the request, it generates very similar examples even across multiple requests.
This surprised me, because I also tested with sampling and contrastive search decoding, high temperature, and so on.

Can someone offer some thoughts on this? Why does a model like this lack creativity?
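For reference, here is a rough sketch of the decoding settings I experimented with. The keyword names follow the Hugging Face transformers `generate` API; the exact values are illustrative, not the ones from my runs:

```python
# Illustrative decoding configurations (kwargs for transformers' `generate`).

# Plain sampling with a raised temperature:
sampling_kwargs = {
    "do_sample": True,
    "temperature": 1.5,   # > 1 flattens the next-token distribution
    "top_p": 0.95,
    "max_new_tokens": 40,
}

# Contrastive search (in transformers, setting penalty_alpha together
# with a small top_k enables it):
contrastive_kwargs = {
    "penalty_alpha": 0.6,
    "top_k": 4,
    "max_new_tokens": 40,
}

# With a loaded model and a tokenized prompt, a generation call would be:
# output = model.generate(**inputs, **sampling_kwargs)
```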

Thank you, in advance, for your time.

Google org

Hey!
Thank you for the feedback on Gemma. I've tried this prompt on Ollama and I'm getting different generations; what I'm wondering is whether the framework is setting a random seed. Are you seeing this with other frameworks as well?
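As a minimal illustration of what a framework-level seed does (pure Python, no model involved): with a fixed seed every "generation" comes out identical, while without one the choices vary run to run.

```python
import random

WORDS = ["The", "A", "My", "Every", "This"]

def pick_first_word(seed=None):
    # Stand-in for one sampling step: choose the sentence's first token.
    rng = random.Random(seed)
    return rng.choice(WORDS)

# With a fixed seed, repeated calls always return the same word:
fixed = [pick_first_word(seed=0) for _ in range(5)]
assert len(set(fixed)) == 1

# With no seed, repeated calls can (and in practice do) differ:
free = [pick_first_word() for _ in range(100)]
print(sorted(set(free)))  # typically several distinct words
```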

image.png

Hello, thank you for your answer.
Yes, when I ran my tests I made sure the seed was not set.
Anyway, aren't your outputs a sign of a lack of creativity?
All the sentences start with "The"; additionally, the first two contain "is chasing the", and in general all the sentences have the same structure.
With a temperature of 1, shouldn't these outputs be more different?
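To make the temperature question concrete, here is a toy sketch (made-up logits, not the actual model's): at T=1 the top token can still dominate the next-token distribution, so samples repeat; raising T flattens it noticeably.

```python
import math

def softmax_with_temperature(logits, t):
    # Divide logits by the temperature before normalizing.
    scaled = [x / t for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Made-up logits where "The" is strongly preferred, as in the outputs above.
logits = [5.0, 2.0, 1.5, 1.0, 0.5]  # e.g. "The", "A", "My", "Every", "This"

p1 = softmax_with_temperature(logits, 1.0)
p3 = softmax_with_temperature(logits, 3.0)
print(round(p1[0], 2))  # ~0.9: the top token dominates at T=1
print(round(p3[0], 2))  # ~0.46: much flatter at T=3
```

So even at temperature 1, if the model's logits are peaked enough, most samples will still begin the same way.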

Edit:
I did some tests, generating with the same prompt 10 times, and what I got is very confusing.
prompt:
"Write a sentence with a present simple inside."
Responses:
The sun is shining brightly today, casting long shadows on the warm grass.
The sun is shining brightly today, casting warm rays onto the flowers in my garden.
The sun is shining today, casting warm beams on the parkland below.
The sun is shining today, making it a perfect day for outdoor activities.
The sun is shining brightly today, making it perfect for going outside and playing in the park.
The sun is shining brightly outside today.
The sun is shining brightly today, making it an ideal day for exploring the park.
The sun is shining brightly today, making it a perfect day to go for a walk in the park.
The sun is shining brightly today, creating a beautiful setting for the picnic.
The sun is shining brightly today, making it the perfect day for an outdoor picnic in the park.

Here's the code:

Screenshot 2024-05-10 alle 09.25.31.png

Hi @matteoperiani

When we prompt for 5 sentences at a time, we can see some variation:

image.png

image.png

But if we prompt repeatedly, the outputs are mostly repetitive and start with the same word, "The":

image.png

Maybe it's because there is no previous context when we generate a single sentence at a time: when prompted again from scratch, the model picks the most probable words and returns very similar sentences.
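That explanation can be sketched with a toy next-word model (illustrative probabilities, nothing to do with Gemma's actual weights): greedy decoding from the same empty context is fully deterministic, so every fresh prompt yields the same sentence, while sampling can diverge.

```python
import random

# Toy next-word table standing in for a language model's conditional
# probabilities (made-up values for illustration).
NEXT = {
    "<s>":  [("The", 0.8), ("A", 0.2)],
    "The":  [("sun", 0.7), ("cat", 0.3)],
    "A":    [("bird", 1.0)],
    "sun":  [("is", 1.0)],
    "cat":  [("sleeps", 1.0)],
    "bird": [("sings", 1.0)],
}

def generate(greedy=True, rng=None):
    word, out = "<s>", []
    while word in NEXT:
        options = NEXT[word]
        if greedy:
            # Always take the most probable continuation.
            word = max(options, key=lambda wp: wp[1])[0]
        else:
            words, probs = zip(*options)
            word = rng.choices(words, weights=probs)[0]
        out.append(word)
    return " ".join(out)

# Greedy decoding is deterministic: same (empty) context, same sentence.
print(generate())  # "The sun is"

# Sampling from the same start can produce different sentences across runs:
print(generate(greedy=False, rng=random.Random()))
```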

Yep, I think the problem is the same. Anyway, I ran several trials with higher temperatures and found that at values around 2-3 it starts to generate more diverse outputs, even with a single request at a time.
I think this lack of creativity in small LLMs is a huge drawback, since I found the same issue with other models as well.

Maybe it would be useful to add some constraints during fine-tuning to limit this? Could it be a limitation of the data used during training?
I hope that one day even models with a few billion parameters can develop skills like the bigger ones; it would be very interesting to have a personal, local model with GPT-4-level power.

Good luck with your work!
Cheers.

matteoperiani changed discussion status to closed
