sgugger committed on
Commit
de0b0a6
1 Parent(s): 4aa0c45

Fix weights by putting the right value in `lm_head.weight`


There was probably a bug in the initial conversion script that created those models: the checkpoints store different
values for `lm_head.weight` and `model.decoder.embed_tokens.weight`, even though those weights are tied in the model.

This was not a problem until now because the weights were tied after loading, so the (wrong) value of `lm_head.weight`
was replaced by the value of `model.decoder.embed_tokens.weight`. It no longer works if the weights are tied before
loading, however, since the value kept may then be the one from `lm_head.weight`, depending on how the tying is done.
As far as I can see, the model stops generating properly on Transformers main.

This should fix the bug without any side effect.
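
For reference, a minimal sketch of how such a checkpoint repair could be done by hand (the file name and the editing approach are assumptions; the actual script used for this fix is not part of the commit):

```python
import torch

# Load the raw checkpoint (assumed to be a plain PyTorch state dict).
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# The model ties these two tensors, so they should hold the same values;
# in the broken checkpoint they differ.
embed = state_dict["model.decoder.embed_tokens.weight"]
print(torch.equal(state_dict["lm_head.weight"], embed))  # False for the broken checkpoint

# Put the right value in `lm_head.weight`: point it at the embedding tensor so
# both keys share one storage, matching what weight tying produces at runtime.
state_dict["lm_head.weight"] = embed
torch.save(state_dict, "pytorch_model.bin")
```

Because both keys then reference the same storage, `torch.save` serializes the tensor data only once, which would also be consistent with the smaller file size in the diff below.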

Files changed (1)
  1. pytorch_model.bin +2 -2
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6419499b7ba43d84ae070b20508ed3690e4a1cca47e1404b953cb1e1f5551bb2
- size 217191077
+ oid sha256:a906a599255e7be63c2c50c92d019aea94c68ef05c0bba3a864fe9f9fc2047bf
+ size 152837829