Ali Elfilali


AI & ML interests

NLP (mainly for Arabic), Reinforcement Learning and Cognitive science


Posts 9

Honestly, I don't understand how we as the open source community haven't surpassed GPT-4 yet. To me it looks like everything is already out there and just needs to be exploited! Specialized small models clearly outperform GPT-4 on downstream tasks! So why haven't we trained a really strong 1B-2B general model, then continually pretrained and/or finetuned it on downstream-task datasets like math and code, well structured in a Textbooks-style format or other formats that have proven to be efficient? Once you have 100 finetuned models, just wrap them all into a FrankenMoE and voila ✨
And that's just what a NOOB like myself had in mind; I'm sure there are better, more efficient ways to do it! So the question again: why haven't we yet? I feel like I'm missing something... Right?
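For what it's worth, the "wrap finetuned specialists into a FrankenMoE" step is roughly what tools like mergekit support today. A minimal sketch of such a merge config, assuming hypothetical expert checkpoints (all model names and prompts below are placeholders, not real repos):

```yaml
# Sketch of a mergekit-moe config: combine specialist finetunes
# into a single Mixture-of-Experts model. Names are illustrative.
base_model: some-org/strong-general-2b        # the shared general model
gate_mode: hidden                             # route via hidden-state similarity to prompts
dtype: bfloat16
experts:
  - source_model: some-org/strong-general-2b-math
    positive_prompts:
      - "solve this equation"
      - "prove the following theorem"
  - source_model: some-org/strong-general-2b-code
    positive_prompts:
      - "write a function that"
      - "debug this code"
```

Each expert's `positive_prompts` seed the router so the right specialist fires per token; scaling this to 100 experts is the open question the post is really asking about.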
view post
Today we launch our space in collaboration with @dvilasuero & @davanstrien so you can help us translate/correct our curated prompt dataset, which will later be used to evaluate the performance of Arabic LLMs and help our community identify how open models perform on Arabic.

How to Get Involved?

1. Visit our Argilla Space and start reviewing prompts.

2. Join our Discord channel on the Hugging Face Discord server to connect with the community and share your insights.