Wrong scale factor?

#1
by fmello93 - opened

AFAIK, the scaling factor should be 32, but the config says it's 8.
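
For reference, this is roughly how I'm reading the value (the model ID below is just an example; gated repos need an accepted license and an HF token):

```python
# Quick check of the rope_scaling block that the repo's config.json exposes.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
print(config.rope_scaling)
# "factor" comes back as 8.0 here, which is what this report is about.
```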

Are you saying it is supposed to be different for the 3.2 models? Because all of the 3.1 models on HF have a factor of 8 as well.

Yes, it's supposed to be 32 for Llama 3.2. I confirmed internally; I believe someone is looking into it.

Possibly related, but I'm getting gibberish from the model when asking for long outputs (it seems worse at high temperature), e.g. with the prompt "write me a very very very long story about frogs. at least 10000 words." The output devolves into gibberish and then at some point recovers with "I apologize for the error. It seems that my previous response was corrupted and contained a large amount of nonsensical text. I'll start again with a new story."

It was updated. Would you mind trying again? @daking

@fmello93 I still get gibberish, but only at very high temperatures (>0.95). It's very strange: it generates gibberish, then at some point "realizes" it and says something along the lines of "I apologize for the error. It seems that my previous response was corrupted and contained a large amount of nonsensical text. I'll start again with a new story." and then carries on with the story.

Also, should the scaling factor update be made to the base models as well?

Also, to be clear, I'm using the model enablement code in the private wheel of llama-models, so I updated the scaling factor (which was hardcoded as 8) there.
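
For context, the scaling there looks roughly like this (a sketch from memory; the exact constants and function name in the wheel may differ), with the factor pulled out as a parameter instead of being hardcoded to 8:

```python
import math
import torch

def apply_scaling(freqs: torch.Tensor, scale_factor: float = 32.0) -> torch.Tensor:
    # Llama-3-style RoPE frequency scaling: low-frequency components are divided
    # by scale_factor, high-frequency components are left untouched, and there is
    # a smooth interpolation in between. The constants below are assumptions
    # based on the Llama 3.1 recipe, not copied from the private wheel.
    low_freq_factor = 1.0
    high_freq_factor = 4.0
    old_context_len = 8192  # original max position embeddings

    low_freq_wavelen = old_context_len / low_freq_factor
    high_freq_wavelen = old_context_len / high_freq_factor

    new_freqs = []
    for freq in freqs:
        wavelen = 2 * math.pi / freq
        if wavelen < high_freq_wavelen:
            new_freqs.append(freq)                 # high frequency: unchanged
        elif wavelen > low_freq_wavelen:
            new_freqs.append(freq / scale_factor)  # low frequency: fully scaled
        else:
            smooth = (old_context_len / wavelen - low_freq_factor) / (
                high_freq_factor - low_freq_factor
            )
            new_freqs.append((1 - smooth) * freq / scale_factor + smooth * freq)
    return torch.tensor(new_freqs, dtype=freqs.dtype, device=freqs.device)
```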

The update to 32 definitely helps; I still occasionally get gibberish at high temperatures, but it seems less frequent. Also @fmello93, should the base model configs also be updated to have a scaling factor of 32?
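
In the meantime, this is the kind of override I'm using locally when loading through transformers (a sketch only, assuming the standard rope_scaling config path; the model ID is illustrative):

```python
# Force factor=32 at load time until the repo config is fixed upstream.
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "meta-llama/Llama-3.2-1B"  # example base model ID
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling["factor"] = 32.0  # override the value shipped in config.json
model = AutoModelForCausalLM.from_pretrained(model_id, config=config)
```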

Sanyam changed discussion status to closed
