Embed tokens missing

#2
by ggiret-thinkdeep

I'm using LLaVA-1.5-13B.

I want to apply a LoRA to it (any one, just to test), like this one for example.
However, I run into an error telling me that the embed tokens do not exist.
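
For context, this is roughly how I try to load it (a minimal sketch; the adapter ID is a placeholder, not a real repo):

```python
import torch
from transformers import LlavaForConditionalGeneration
from peft import PeftModel

base = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-13b-hf",
    torch_dtype=torch.float16,
)
# The failure happens here: the adapter config references modules
# (e.g. embed_tokens) that the checkpoint does not actually contain.
model = PeftModel.from_pretrained(base, "someuser/some-llava-lora")  # hypothetical adapter ID
```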

After extensive research, here is what I found on the internet:

For certain models, the embedding layers are also trained during LoRA fine-tuning (for example, when new words appear only in the fine-tuning dataset):
https://github.com/TUDB-Labs/multi-lora-fine-tune/issues/122
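
If I understand correctly, the situation arises like this (a sketch using plain LLaMA, assuming new tokens were added to the tokenizer before fine-tuning):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

tok.add_tokens(["<new_word>"])           # vocabulary grows by one token
model.resize_token_embeddings(len(tok))  # embed_tokens and lm_head gain a new row
# The new rows only carry meaning once they are trained, so they have to be
# saved alongside the LoRA weights instead of being re-initialized at load time.
```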

LLaMA (and therefore, by extension, LLaVA) is one of these models.
According to the comment on lines 1334 to 1337 of this script: https://github.com/FartyPants/Training_PRO/blob/main/script.py

# modules_to_save = ["lm_head", "embed_tokens"]
# If you added new tokens to the tokenizer, you may need to save some LoRA modules because they need to know the new tokens.
# For LLaMA and Mistral, you need to save `embed_tokens` and `lm_head`. It may vary for other models.
# `embed_tokens` converts tokens to embeddings, and `lm_head` converts embeddings to token probabilities.

You therefore need to set `modules_to_save = ["lm_head", "embed_tokens"]` in the LoRA configuration.
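
In PEFT, that would look something like this on the training side (a sketch; the `target_modules` are a common default I assumed, adjust for your setup):

```python
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],          # assumed attention projections
    modules_to_save=["lm_head", "embed_tokens"],  # full copies saved with the adapter
)
peft_model = get_peft_model(model, config)  # `model` from the previous sketch
```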

In your LoRA, these modules are not configured, which causes the error: the embedding layer and the new tokens you regenerated are not included in the provided LoRA adapter.

Don't hesitate to tell me if I'm wrong about something.
