built on gemma or llama?

#3
by YaoLiu61

Hi, this work is great! I have benefited a lot from it. I have some questions.

In the technical report, it says:
SeaLLMs are built upon the Llama-2 model and further advanced through continued pretraining, specialized instruction and alignment tuning.

But in the Model Card, it says:
SeaLLM-7B-v2.5 was built on top of Gemma-7b, and underwent large scale SFT and carefully designed alignment.

My questions are:

  1. Was v2.5 built on Gemma or Llama? When did it change from Llama to Gemma, and why?
  2. If v2.5 was built on Gemma, is there a continued pretraining stage?

Thanks a lot!

SeaLLMs - Language Models for Southeast Asian Languages org

@YaoLiu61
The technical report was written for v1, which was Llama-2 based. SeaLLM-7B-v2 is based on Mistral-7B, and v2.5 is based on Gemma-7B. The reason is that we rely on the strong English performance of other base models to build stronger performance in SEA languages. So we continuously update and improve the SeaLLM models, not only through a better SEA training pipeline but also by building on the strongest English-focused base models.

v2.5 was also continued pre-trained from Gemma. Sorry for the confusion.
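
If you want to double-check the base architecture yourself, here is a minimal sketch using the transformers library. It assumes the Hub repo id `SeaLLMs/SeaLLM-7B-v2.5` from the model card; inspecting the config alone reveals the underlying architecture without downloading the weights.

```python
# Minimal sketch: verify the base architecture of SeaLLM-7B-v2.5
# by reading its config from the Hub (repo id assumed from the model card).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")
print(config.model_type)     # expected: "gemma" for v2.5
print(config.architectures)  # e.g. ["GemmaForCausalLM"]
```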
