Swahili training schedule

#2
by LeroyDyer - opened

First, train for translation.
Then for next-word generation (raw Swahili texts).
Then for question and answer (Swahili Q&A).
Then for tasks (harder: we need to take a good dataset, perhaps only 100K tasks, and send them to Google Translate; the translated dataset can then be used to train for instructions; see the sketch below).
Later we would expect it to naturally apply previous English tasks to Swahili requests.
We would also want to translate a set of GPT conversations to Swahili and then apply them for conversation.
Then repeat the whole cycle, even with the same data, a few times to embed the information into the LLM.
This should give us a few LoRAs and models we can use for merging, to create merged models with Swahili data, each trained on the cycle and re-merged!
These merge, train, and re-merge cycles amplify the data inside. I personally consider a merge like applying a LoRA. Keep your base model as the Swahili model, but after you have created these merge-trained clones you should only work with other Swahili datasets and bots for these models, enforcing the Swahili within. We would also need some school classes and textbooks in Swahili.
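Here is a minimal sketch of that Google Translate step, assuming the official google-cloud-translate client and an Alpaca-style instruction dataset; the dataset name, field names, and output path are illustrative, not a fixed choice:

```python
# Sketch: translate a slice of an English instruction dataset to Swahili
# with the Google Cloud Translation API (v2 client). Assumes credentials
# are configured via GOOGLE_APPLICATION_CREDENTIALS; dataset and field
# names below are illustrative.
from datasets import load_dataset
from google.cloud import translate_v2 as translate

client = translate.Client()

def to_swahili(batch):
    # translate() accepts a list of strings and returns a list of dicts
    results = client.translate(batch["instruction"], target_language="sw")
    batch["instruction_sw"] = [r["translatedText"] for r in results]
    # repeat the same call for the "input" and "output" fields as needed
    return batch

ds = load_dataset("tatsu-lab/alpaca", split="train")
# Take up to ~100K tasks, as proposed above.
ds = ds.shuffle(seed=42).select(range(min(100_000, len(ds))))
ds = ds.map(to_swahili, batched=True, batch_size=64)
ds.save_to_disk("alpaca_swahili")
```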

This would be the type of process to use for any language.
I believe that Swahili should form the base for the African LLM!
Afterwards we can apply some other datasets from the CulturaX multilingual dataset, i.e. Zulu and other similar languages, which should extend its basic knowledge, before applying Lemba and others like Yoruba etc., which actually have the common Swahili root or influence!
The network should be able to adjust to whatever is input and respond in the language or dialect given (filling in where it has none!).

Owner

Hey @LeroyDyer, this is a great idea and provides a solid framework for advancing natural language understanding in Swahili.

On the translation task, we fine-tuned an existing pre-trained model on the dataset we had collected, since training our own model from the ground up was near impossible: the dataset was not large enough, and the accuracy of the translation model we created from scratch was poor (https://github.com/Rogendo/Eng-Swa-Translator). I appreciate the structured approach you've proposed, starting with translation tasks and progressing to more complex tasks such as word generation and question answering. This incremental approach ensures a thorough understanding of Swahili language nuances and grammar.
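For reference, a minimal sketch of that fine-tuning setup, assuming the public Helsinki-NLP/opus-mt-en-sw checkpoint and a sentence-pair CSV with sentence1 (English) and sentence2 (Swahili) columns; this is not the exact code from the Eng-Swa-Translator repo:

```python
# Sketch: fine-tune a pre-trained Eng->Swa translation model rather than
# training from scratch. The checkpoint, file, and column names are
# assumptions, not the repo's actual code.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

ckpt = "Helsinki-NLP/opus-mt-en-sw"  # assumed public opus-mt checkpoint
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

def preprocess(batch):
    # sentence1 = English source, sentence2 = Swahili target (assumed columns)
    enc = tok(batch["sentence1"], truncation=True, max_length=128)
    enc["labels"] = tok(text_target=batch["sentence2"],
                        truncation=True, max_length=128)["input_ids"]
    return enc

pairs = load_dataset("csv", data_files="eng_swa_pairs.csv")["train"]
pairs = pairs.map(preprocess, batched=True, remove_columns=pairs.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments("eng-swa-ft", num_train_epochs=3,
                                  per_device_train_batch_size=16,
                                  learning_rate=2e-5),
    train_dataset=pairs,
    data_collator=DataCollatorForSeq2Seq(tok, model=model),
)
trainer.train()
```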

Considering the importance of Swahili as the base language for African language model development, I fully support the idea of extending our efforts to other African languages influenced by Swahili, such as Zulu and Yoruba. This collaborative approach will not only enrich our models' knowledge but also contribute to the broader goal of advancing African language processing capabilities.

I'm excited to contribute to this initiative and offer my support in any way needed. Together, we can create robust language models that accurately capture the richness of Swahili and other African languages.

Yes, after a day of training, mine was not great either ... hakuna shida (no problem) ... it means we need a large corpus. Also:

The best technique I found was mass epochs on a small set, i.e. 50-100 samples, as we need to overfit for the task! (They always advise against overfitting, but we need it to jump-start the model.)
I could not get the seq-to-seq task working on the translator task, which is what the dataset requires. Also, the columns should be sentence 1 and sentence 2 like the other translator datasets, but I did find a few other new datasets.

But as I say, we need mass training epochs on a small dataset until the loss is around 0.0002 on that set. Then we need to lock it into a LoRA, as well as merge the LoRA into the main model,
to begin again on the extended dataset: 100-1000 samples. Expect the loss to jump up again, but run as many epochs as possible until the dataset is overfit!
Then we would need a large dataset of the same data, and train again for some cycles, merging and saving the LoRAs (a LoRA can be reapplied and merged multiple times; see the sketch below).
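A minimal sketch of that lock-into-a-LoRA-and-merge cycle with the PEFT library; the base model, rank, and paths are placeholders, and the actual training loop is elided:

```python
# Sketch of the overfit -> save LoRA -> merge cycle described above,
# using PEFT on a Mistral-style base. Names/paths are placeholders.
import torch
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16)

# 1) Attach a LoRA and mass-train it on the small set (training loop
#    elided) until the loss is tiny, e.g. ~0.0002.
lora_cfg = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                      target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora_cfg)
# ... train until overfit ...
model.save_pretrained("swahili-lora-stage1")   # "lock it into a LoRA"

# 2) Merge the LoRA weights into the main model so the next cycle
#    starts from the merged checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("swahili-base-stage1")

# 3) The saved LoRA can be reapplied to a compatible model later and
#    merged again, multiple times, as noted above.
reloaded = PeftModel.from_pretrained(merged, "swahili-lora-stage1")
merged2 = reloaded.merge_and_unload()
```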
Slowly we will begin to transform a model (starting from a fresh, empty model was also very hard, as it needs long training, and on Colab it also kept cutting out); hence the LoRAs. These will form the plugins we need: create a base model from a config file in Python (this generates a NEW model) and apply the LoRAs as a jump-start for that model!
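And a sketch of that "new model from a config file" idea; note the fresh weights are random, so the LoRA only nudges it, which is exactly why the long training is still needed (names are placeholders):

```python
# Sketch: generate a NEW, untrained model from a config file, then apply
# a previously trained LoRA as a jump-start. Config source and LoRA path
# are placeholders; the fresh model still needs long training.
from peft import PeftModel
from transformers import AutoConfig, AutoModelForCausalLM

cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")  # or a local config.json
fresh = AutoModelForCausalLM.from_config(cfg)   # randomly initialised weights

# Apply the saved LoRA "plugin"; it carries only the low-rank adapters,
# so it is a jump-start, not a full set of knowledge.
fresh = PeftModel.from_pretrained(fresh, "swahili-lora-stage1")
```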

Whoa, not easy! But we need to use our Mistral models to train the LoRAs first, as the LoRAs are obviously transferable between Mistral models!

Oh yes, I'm in Tanzania! Dar es Salaam. Karibu (welcome), kaka (brother)!

Owner

Ahsante ndugu, nimeshakaribia (thank you, brother, I'm already close by)...
I won't lie, the concept of letting the model overfit is new to me, but at the same time the idea isn't that far-fetched: an overfitted model presents several advantages, such as capturing the noise or random fluctuations in the training data. This allows the model to become deeply acquainted with the complexities and nuances of the language, including grammatical structures and contextual dependencies that may otherwise be overlooked...
How long have you been working on this?

I just started working on the Swahili bots!

I have made a few on my site here:

Some have come from merges (i.e. the base model + my best model), then training applied (first I trained on the sentence pairs), then some short stories... nothing was good!
I found a better technique for quicker training:
First I train on only 25 samples (maybe 10-20 epochs), whatever it needs to get the loss below 1, ideally to 0.something.
Then I change the seed and retrain on another 25 the same way. This way I know that these samples have made it inside the brain 100% (see the sketch after this paragraph).
Afterwards, when I was doing the Swahili Platypus and Alpaca, I used the same technique for 400 samples, then merged the LoRA and kept a copy of the LoRA.
I applied the LoRA; then when I reload the model, I use the LoRA to load it and train again on the same dataset, maybe 400 samples, and watch the loss go down, usually from 3-4 down to around 1. If it is not dropping, I go back to 25/25.
The model I used was a very good model and was accepting training very well, so I also re-merged it with some of the older Swahili models (maybe not great), retrained, and reapplied the same LoRA.
I also found a dataset of 100 samples which asks direct yes/no questions, so I overtrained on it until a loss of 0.001, and when I returned to the same Alpaca data I was able to realign again.
Since it is a 7B model, I got bored and went and created a model from my 3B (I also started it from nothing, so I have been trying to get it to work). Since it is slow, I got lucky and used the training to push only raw text into the model instead of instructs! So I pushed the short stories and all the sentence pairs, overfitting until 0.001 for most samples. I expect it to be responding, but not well, as we need more data, i.e. social media posts!
The small one has no brother, so we have to make models to merge it with, i.e. from different stages of the training. I back one up and it will be a merge later; the top one will always be the one in training!
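A minimal sketch of that 25-sample, change-the-seed routine; the model, dataset file, and text field are placeholders, and the 20 epochs stand in for "whatever it needs" to reach the 0.something loss mentioned above:

```python
# Sketch: the 25-sample / change-the-seed routine. Each round overfits a
# fresh 25-sample slice with many epochs before moving on. Model,
# dataset file, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

full = load_dataset("json", data_files="swahili_alpaca.json")["train"]
full = full.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=full.column_names)

for seed in (42, 1234, 7):                      # change the seed each round
    batch25 = full.shuffle(seed=seed).select(range(25))
    trainer = Trainer(
        model=model,
        args=TrainingArguments(f"round-{seed}", num_train_epochs=20,
                               per_device_train_batch_size=4,
                               learning_rate=2e-4),
        train_dataset=batch25,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()        # overfit this 25-sample slice, then move on
```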

We both have to go on training paths: take one of my Swahili models and the small one, do some training, and then I will merge them into mine, and then you can take the re-merges. That way we will get good results from different types of training. I saw the Wikipedia dataset, so I will push the Swahili version into it, but only after it is learning faster; then I can just leave it training for one epoch of the whole dataset, save the LoRA 100%, and use it as a base point for overfitting the model to the same data on other days. Eventually we will have a chatbot with knowledge, able to make predictions, i.e. solve tasks it did not see before, as we can also translate a few datasets into Swahili!
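One plain way to do the merging step is a linear average of the two checkpoints' weights; this sketch is not any particular merge tool, and the 50/50 ratio and model paths are placeholders:

```python
# Sketch: naive linear merge of two same-architecture checkpoints by
# averaging their weights. Paths and the 50/50 ratio are placeholders;
# dedicated tools (e.g. mergekit) offer more sophisticated methods.
from transformers import AutoModelForCausalLM

a = AutoModelForCausalLM.from_pretrained("my-swahili-7b")
b = AutoModelForCausalLM.from_pretrained("partner-swahili-7b")

state_b = b.state_dict()
merged = {name: 0.5 * wa + 0.5 * state_b[name]   # equal-weight average
          for name, wa in a.state_dict().items()}

a.load_state_dict(merged)
a.save_pretrained("swahili-7b-merged")
```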

I only put English maths into it, and English coding problems! I.e. Dolphin Coder and the white-hat coder, etc.

We can see what happens. There are a few others out there whom I will try to inspire to do some training on their own Swahili model, and I also hope that we can all be merging models (that's how they get to the top!). My model was already in the top ten, so its base is really good (CyberTron); I changed its name so people would not keep downloading it!

PS: I'm using Google Cloud! So if I get a chance I will spend 10 euros! But we need a good base first so that the long training (i.e. the Wikipedia, the social media posts, and the news articles) will have value!

The guy who invented the GPT model said it is best to just train on simple texts or question-and-answer data until it is semi-overfit; then you can fine-tune it for tasks! So we need to feed it a text corpus first (a minimal sketch of that raw-text stage follows).
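A sketch of that text-corpus-first stage, i.e. plain next-word training on raw Swahili text for one pass over the whole set; the model name and corpus file are placeholders:

```python
# Sketch: plain next-word (causal LM) training on a raw Swahili text
# corpus, the "text corpus it first" stage. Names are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

raw = load_dataset("text", data_files="swahili_corpus.txt")["train"]
raw = raw.map(lambda b: tok(b["text"], truncation=True, max_length=512),
              batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("swahili-lm", num_train_epochs=1,  # one full pass
                           per_device_train_batch_size=4),
    train_dataset=raw,
    # mlm=False gives next-token labels, i.e. next-word generation
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```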

Owner

This is impressive, and I am inspired to even create my own Swahili LLM... Speaking of Swahili AI, there is a conference in Tanzania on the 27th of April that might interest you!

Join the 7th GenAI Meetup: Explore Swahili Language and AI!

Are you interested in the intersection of Artificial Intelligence and the Swahili language? Then the 7th GenAI Meetup is the perfect event for you!

Join us on Saturday, April 27th, 2024 at Dlab - Dar es Salaam to explore the exciting possibilities of Swahili Large Language Models (LLMs).

What to expect:

Theme: African Tech Innovation: Bridging Heritage and Innovation through Swahili LLMs

Keynote Address: Hear from Dr. Michael S. Mollel, a leading expert in Swahili AI.

Swahili AI Hackathon: Put your skills to the test and win prizes!

Networking Opportunities: Connect with AI enthusiasts, language professionals, and tech innovators.

Learn and Be Inspired: Gain valuable insights into the latest advancements in AI for African languages.

Why Attend?

Expand your skillset and explore new applications of Swahili AI.

Boost your creativity and integrate AI into your projects.

Network with the AI community and connect with like-minded individuals.

Support Tanzania's technological progress and empower the tech landscape.

Register now to secure your spot at this groundbreaking event.

Register for the event:
https://lu.ma/bvblhvle

For more information, contact: +255 744 100 357

We look forward to seeing you there!

Neurotech Africa
