Update config file and question on float32

#5
by RonanMcGovern - opened

I notice that Llama 7B is listed as the model name in the config file.

Separately, I see that the dtype is float32 in the config file. Just confirming that's correct? I guess Llama 7B was also trained in float32, but float16 models are then pushed to the Hub, like here

The model is trained with mixed precision, so the model file itself is saved as float32.
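To make the point above concrete, here is a minimal sketch of inspecting the `torch_dtype` field in a Hugging Face `config.json`. The field values below are hypothetical illustrations (the `_name_or_path` and `hidden_size` are assumptions, not taken from the actual TinyLlama config):

```python
import json

# Hypothetical excerpt of a config.json, illustrating the fields discussed above:
# "_name_or_path" may still carry the base Llama name, and "torch_dtype" is
# "float32" because the checkpoint is saved in full precision even though
# training itself used mixed precision.
config_text = """
{
  "_name_or_path": "meta-llama/Llama-2-7b-hf",
  "torch_dtype": "float32",
  "hidden_size": 2048
}
"""

config = json.loads(config_text)
print(config["torch_dtype"])  # -> float32
```

When loading such a checkpoint for inference, one can typically override the stored dtype (for example, requesting float16 at load time) rather than relying on the value saved in the config.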

Thanks! I suppose the model name will be fixed in more final versions as training comes to a close.

RonanMcGovern changed discussion status to closed
TinyLlama org

> Thanks! I suppose the model name will be fixed in more final versions as training comes to a close.

Yes, definitely!
