"This model doesn't dramatically improve on the base model's general task performance"
I have been playing around a lot with 1B and of course 1.5B parameter models lately. They are super easy to experiment with when you have limited compute, so I love them. But I have noticed generalization limitations in every single model in this range I have tested so far. It seems to me that broader generalization requires a few more parameters than 1.5B.
phi-1.5 is primarily meant to prove that a high-quality pretraining dataset can be comparable to a large amount of mediocre internet corpora. These smaller models are also trained to project loss predictions, as a kind of "foreplay" for larger models before committing to scaling up.
Yes, I know that is why people build them. People should research them more, though. I am very interested in them for their research value. There are questions that do not get answered if you only treat them as a stepping stone to scaling up to more parameters.
@TuringsSolutions Have you tried using this model inside the Hugging Face chat UI? I feel like some of the tricks from OpenAI's cookbook would help; namely adding search results to the context and making the model do another pass to first think through a problem before giving a final reply.
That would of course complicate the code to run the model (and increase inference time) quite a bit, but I figure it's worth trying out.
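Something like this rough sketch is what I had in mind for the two-pass trick (the checkpoint name, prompts, and helper function are just my assumptions for illustration, not anything from the cookbook; swap in whatever 1B-1.5B model you're testing):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint for the sake of the example; any small causal LM works.
model_id = "microsoft/phi-1_5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def generate(prompt, max_new_tokens=200):
    """Greedy-decode a completion and return only the newly generated text."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Pass 1: have the model think the problem through before answering.
reasoning = generate(
    f"Question: {question}\nThink step by step about how to solve this:\n"
)

# Pass 2: feed the reasoning back in (this is also where you could splice in
# retrieved search snippets) and ask for a concise final answer.
final_answer = generate(
    f"Question: {question}\n"
    f"Relevant notes:\n{reasoning}\n"
    f"Using the notes above, give the final answer:\n"
)
print(final_answer)
```

It's basically two plain generation calls, so it roughly doubles latency, but it keeps the chat UI integration simple.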