Note: this model has not been tested yet!

It was produced by simply taking the original Gemma 2b config, renaming the weights, and adding an `lm_head.weights` tensor.
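A minimal sketch of what such a conversion could look like, assuming a standard PyTorch checkpoint. The file paths, the rename mapping, and the key names used here are illustrative assumptions, not taken from this repo (note that the Hugging Face convention is `lm_head.weight`, singular):

```python
# Illustrative conversion sketch -- not the exact script used for this model.
import torch

# Load the original Gemma 2b checkpoint (path is an assumption).
state_dict = torch.load("gemma-2b/pytorch_model.bin", map_location="cpu")

# Rename weights to match the target architecture's naming scheme.
# The mapping below is a placeholder example only.
rename_map = {
    "model.embed_tokens.weight": "transformer.wte.weight",
}
new_state_dict = {rename_map.get(k, k): v for k, v in state_dict.items()}

# Gemma ties the LM head to the token embeddings, so an explicit
# lm_head weight can be materialized by copying the embedding matrix.
new_state_dict["lm_head.weight"] = state_dict["model.embed_tokens.weight"].clone()

# Save the converted checkpoint (output path is an assumption).
torch.save(new_state_dict, "converted-model/pytorch_model.bin")
```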