Function Scalability?

by Starlento - opened

It is really great to see work that focuses on improving real-world efficiency rather than just acting as a chatbot.

It seems you are tokenizing the functions (which you clearly consider important) and training only on those functions, so the model learns to use these tools through its parameters rather than by pasting their descriptions into the prompt. This does save VRAM, since the tool definitions live in the model weights rather than in the context.
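For concreteness, here is a minimal sketch of how such a functional token could be registered. This is not the authors' actual pipeline; it assumes the Hugging Face `transformers` library, a Gemma-family base model, and a hypothetical token name `<nexa_new_func>`:

```python
# Minimal sketch (assumptions: Hugging Face `transformers`, a Gemma-family
# base model, and a hypothetical new function token "<nexa_new_func>").
# This is NOT the authors' training pipeline, just an illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

# Register the function as one atomic special token: it is never split
# into subwords and never collides with ordinary text.
tokenizer.add_special_tokens({"additional_special_tokens": ["<nexa_new_func>"]})

# Give the new token id a trainable embedding row; fine-tuning then
# teaches the model when to emit it.
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.convert_tokens_to_ids("<nexa_new_func>"))  # e.g. 256022
```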

But I want to know: do you have any "best practices" for adding more functions later?
And why do you think making each function a dedicated special token in the tokenizer (growing the vocabulary from 256000 to 256022) is better than using a short piece of free text while keeping the original tokenizer?

Nexa AI org

Hi Starlento, thanks for your great questions. If you don't use a special token:

  1. To generate a correct function name like take_photo, the model must get several consecutive token predictions right, since the name is split into multiple subword tokens;
  2. If you still spell out take_photo as plain text, those subword tokens also appear in ordinary natural language containing "take" or "photo", so the model cannot differentiate a function call from normal text well. The sketch below illustrates both points.
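
A small illustration of the two failure modes, assuming the Hugging Face `transformers` tokenizer for a Gemma-family base model (the exact subword splits are tokenizer-dependent):

```python
# Illustration of the two failure modes above (assumption: Hugging Face
# `transformers` with a Gemma-family tokenizer; exact splits may differ).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# 1. As plain text, "take_photo" splits into several subword tokens, and
#    every one of them must be predicted correctly for the call to parse.
print(tokenizer.tokenize("take_photo"))        # e.g. ['take', '_', 'photo']

# 2. The same subwords also appear in ordinary sentences, so their
#    embeddings are shared with natural-language usage.
print(tokenizer.tokenize("Let me take a photo"))

# With a dedicated special token, the function is a single id whose
# embedding is disjoint from all natural-language text.
tokenizer.add_special_tokens({"additional_special_tokens": ["<take_photo>"]})
print(tokenizer.tokenize("<take_photo>"))      # ['<take_photo>']
```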
