Adding Arabic Language Support to Falcon Tokenizer

#4
by adeebDkheel - opened

Hi,

I'm working with the Falcon tokenizer and need to process Arabic text. I'd like to know:

  1. Does the base Falcon tokenizer already support Arabic characters?
  2. If not, what would be the best approach to extend it for Arabic language support?

Has anyone successfully implemented this before? Any guidance or references would be appreciated.

Thank you

Technology Innovation Institute org

Hi @adeebDkheel

Thanks for the issue! Unfortunately, this tokenizer has not been explicitly trained on Arabic. There are a couple of viable approaches:

  • Train a new tokenizer from scratch on Arabic only, initialize the model with a new embedding matrix and language-model head, then re-train the model on Arabic
  • Extend the current tokenizer with an Arabic tokenizer (I'm not sure exactly how to do that) and retrain the embedding matrix + LM head
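As a rough sketch of the first step of either option (training a tokenizer on Arabic text), here is a minimal example using the Hugging Face `tokenizers` library. The corpus, vocabulary size, and special tokens are placeholders; a real run would stream a large Arabic corpus and use a vocabulary size comparable to the base model's.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Tiny in-memory Arabic corpus as a stand-in; in practice, stream a large corpus.
corpus = [
    "مرحبا بالعالم",
    "اللغة العربية جميلة",
    "تعلم الآلة مجال واسع",
]

# Build and train a BPE tokenizer on the Arabic text.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=500, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

# Encode an Arabic phrase; with a tokenizer trained on this text,
# no characters should fall back to [UNK].
enc = tokenizer.encode("اللغة العربية")
print(enc.tokens)
```

From here you would either initialize a fresh embedding matrix and LM head sized to the new vocabulary (option 1), or merge this vocabulary into the existing tokenizer's (option 2).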

Other team members may have better ideas, so let's let them chime in on the conversation :) Feel free to also join our Discord channel and ask this question there.

Hello ybelkada,

Thank you for the detailed response!

I'd like to explore the second approach you mentioned (extending the current tokenizer).
Could you or someone from the team help clarify whether an approach similar to
'Extending Llama to a new language' (https://github.com/meta-llama/llama-recipes/tree/main/recipes/use_cases/multilingual)
would work with Falcon3?
I'm particularly interested in understanding whether there are any Falcon-specific considerations to keep in mind.

Thanks again and Best Regards,

Technology Innovation Institute org

Hi @adeebDkheel

Thank you!
The Falcon3 series leverages the Llama architecture, so you shouldn't face any issues applying that approach to Falcon3.
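To illustrate the mechanics of the extension approach in isolation, here is a plain-Python sketch. The vocabularies and embedding matrix are toy stand-ins, not Falcon's actual ones: only unseen Arabic tokens are merged into the base vocabulary, and matching embedding rows (mean-initialized, a common heuristic) are appended so the matrix stays in sync before retraining.

```python
import random

# Stand-in for the base tokenizer's vocabulary (token -> id).
base_vocab = {"<pad>": 0, "hello": 1, "world": 2}

# Tokens learned by a separately trained Arabic tokenizer (toy values);
# note "hello" already exists in the base vocabulary.
arabic_tokens = ["مرحبا", "بالعالم", "hello"]

# 1. Extend the vocabulary, skipping tokens already present.
for tok in arabic_tokens:
    if tok not in base_vocab:
        base_vocab[tok] = len(base_vocab)

# 2. Extend a toy embedding matrix to match: new rows are initialized
#    to the mean of the existing rows (a common heuristic).
dim = 4
embeddings = [[random.gauss(0.0, 0.02) for _ in range(dim)] for _ in range(3)]
mean_row = [sum(col) / len(embeddings) for col in zip(*embeddings)]
while len(embeddings) < len(base_vocab):
    embeddings.append(list(mean_row))

print(len(base_vocab), len(embeddings))  # vocab and matrix stay in sync
```

With real models, `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(len(tokenizer))` in `transformers` performs the equivalent bookkeeping; the retraining of the embeddings and LM head on Arabic data then proceeds as in the Llama recipe.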

Best Regards

adeebDkheel changed discussion status to closed