Are you going to release your Hawk architecture models as well as your larger Griffin models (e.g: Griffin 14B) from your paper?

#9
by Joseph717171 - opened

In your paper, you detail another architecture, Hawk, and you list larger Griffin models (Griffin 14B). Are you going to release these other models as well? 🙏

Joseph717171 changed discussion title from Are you going to release your hawk models as well as the larger Griffin models from your paper? to Are you going to release your Hawk architecture models as well as your larger Griffin models (e.g: Griffin 14B) from your paper?
Google org

Unfortunately, we cannot release the trained models from the Griffin paper, as Google can't release weights for models trained on MassiveText (since it contains the Books dataset).

However, it should be straightforward to create the config for Hawk from the code we released on GitHub. For example, we instantiate an example model config here:
https://github.com/google-deepmind/recurrentgemma/blob/0f5ca57442f17c7309c70b0228fd8e5505cbdaa1/examples/simple_run_jax.py#L43

block_types is a tuple listing the temporal-mixing blocks used (i.e. the length of block_types is the model depth). To get a Hawk model, simply repeat "recurrentgemma.TemporalBlockType.RECURRENT" for the desired depth.
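
As a rough sketch (the depth and the import form below are placeholders of mine, not values from the paper; follow the linked example for the actual setup):

```python
import recurrentgemma  # import as done in the linked simple_run_jax.py example

# Placeholder depth purely for illustration; choose whatever depth you need.
depth = 12

# A Hawk model uses only recurrent temporal-mixing blocks, so block_types
# simply repeats the recurrent block type once per layer.
hawk_block_types = (recurrentgemma.TemporalBlockType.RECURRENT,) * depth

# Pass hawk_block_types as the block_types argument of the model config
# constructed in the linked example to obtain a Hawk-style model.
```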

This is a good suggestion though, and we will look into adding a clearer description of how to create the different models from the paper. I don't actually know how the HuggingFace code is structured, but I imagine it is straightforward to create Hawk models from there as well!
