Bug in logits for BOS token.

#90
opened by Izarel

For the 7b-it model, in both this variant and the new one, there is a bug in the logits of the first BOS token after inference.
If you look at the output, it gives all the weight to token id 2, which is the BOS token itself; as a result you get near-zero probabilities for all other tokens.

I validated the logits of the next tokens, and they seem to be fine.
You can easily verify this by looking at the logit/softmax values for the tokenized empty string (the tokenizer adds BOS by default).
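
For reference, a minimal sketch of that check - assuming the google/gemma-7b-it checkpoint from this repo, a GPU/MPS with enough memory, and the accelerate package for device_map:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b-it"  # the variant under discussion here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Tokenizing the empty string still yields the BOS token (id 2, per this thread).
inputs = tokenizer("", return_tensors="pt").to(model.device)
print(inputs.input_ids)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Softmax over the prediction made *from* the BOS position.
probs = torch.softmax(logits[0, 0].float(), dim=-1)
top_probs, top_ids = probs.topk(5)
for p, i in zip(top_probs, top_ids):
    print(f"token {i.item():>6} ({tokenizer.decode([i.item()])!r}): {p.item():.6f}")
# Reported behavior: almost all probability mass lands on token id 2 (BOS) itself.
```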

It might be a training issue with the base model (wrong token input for prediction).

Hi Izarel, can you share some code to help debug this? That would really help.

Google org

A reproducible colab would be perfect if possible, even with a couple of examples!

Here is a link to a colab - unfortunately it crashes when I try to load the model there, but it should work on an NVIDIA GPU/MPS:
https://colab.research.google.com/drive/1ADRUjfO8k8H8OfDeprRrSr1g1hDH6f33?usp=sharing
Here is a screenshot:

[screenshot: logit/softmax output for the BOS token]

Google org

Thanks, can you give me a couple of days to look into this? Will try to figure out what's going on.

Google org

Hey Izarel,
You took the time to put together a colab, so I want to be sure you get a proper explanation from me.

I'm currently unsure whether this bug is in the weights, in the HF framework, somewhere else in the environment like PyTorch, or even deeper (though that's unlikely). I'm trying to reproduce this in Flax to see if it persists. If it's not there, then we know it's likely somewhere in the HF framework or the weights on the HF Hub.

I briefly tried reproducing on colab but was hitting OOM errors like you were. If I can't try this there, it'll take me over a week to get back to you, as I'm away from home and, most relevantly, from my GPU desktop.

You're just trying to run a model, so I apologize for the delay. I'll get back to you with an answer though.

Cheers

No problem - I appreciate the help!

I've been able to repro it both on MPS and on CUDA, though I loaded the model only via Hugging Face.

Just following up on this - are there any updates on the matter?
No rush - just wanted to make sure I'm not missing a potential fix.

Thanks again, really appreciate the effort!

Google org

Hey! Sorry for the delay on my end. I wasn't able to isolate whether this was a weights issue, a framework issue, or a quantization issue with what is currently available. However, with the Gemma 2 release there are going to be updates for all three, as well as some extra checks.

https://blog.google/technology/developers/gemini-gemma-developer-updates-may-2024/

Will a June timeline be alright?

Thanks for the investigation!

Unfortunately, I'm using the model purely for research analysis - I had to compute a metric that depends on the outputs of the model, including the log-probs of the first token.
If there is anything you'd suggest I check on my side, I'd be more than willing to do the investigation as well.
I'm specifically interested in this variant since it has a lot of existing evaluations on downstream tasks, which I've been using.
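
For concreteness, the computation involved is roughly the following - a minimal sketch assuming a standard HF causal-LM setup (the `sequence_logprobs` helper name and example text are illustrative, not my actual analysis code):

```python
import torch
import torch.nn.functional as F

def sequence_logprobs(model, tokenizer, text):
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(ids).logits  # (1, seq_len, vocab_size)
    # Position t predicts token t+1, so align logits with the tokens they score.
    logprobs = F.log_softmax(logits[:, :-1].float(), dim=-1)
    targets = ids[:, 1:]
    return logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (1, seq_len - 1)

lp = sequence_logprobs(model, tokenizer, "Hello world")
# lp[0, 0] is the log-prob of the first real token, computed from the BOS
# position - exactly the value the buggy BOS logits corrupt.
print(lp)
```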

However - I do understand that it's not a top priority since a new model version is coming.
I did manage to get results on the 2B-it variant, so I have some measurements for the paper anyway (;
