Nice, brother!
Nice, bro!
Question ⁉️
I have not trained a Whisper model before. How do you train it?
Were you able to use your own voice?
Please tell me!
Thank you!
Hi, brother! Nice to hear from you!
I was able to use my own voice using Coqui AI.
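For reference, here is a minimal sketch of that voice-cloning setup with the Coqui TTS package and its XTTS v2 model; the file names are just placeholders:

```python
# Minimal voice-cloning sketch with the Coqui TTS package and the XTTS v2 model.
# "my_voice.wav" is a placeholder for a short, clean recording of your own voice.
# The first run downloads the model and asks you to accept its license.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="This is a test of my cloned voice.",
    speaker_wav="my_voice.wav",   # reference clip of your own voice
    language="en",                # XTTS v2 also covers fr, pt and others
    file_path="output.wav",
)
```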
To train the Whisper model, this is one of the good notebooks: https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb
It does not use the new version of Whisper, which looks great and is even faster.
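And here is a condensed sketch of the same recipe that notebook walks through, assuming the Hugging Face transformers and datasets libraries; the Common Voice Swahili slice and the hyperparameters are only placeholders:

```python
# Condensed Whisper fine-tuning sketch (same recipe as the notebook above).
# The dataset, language, and hyperparameters are placeholders: here, a tiny
# slice of Common Voice 11 Swahili ("sw"), which is gated on the Hub, so you
# must accept its terms and log in with `huggingface-cli login` first.
import torch
from datasets import Audio, load_dataset
from transformers import (Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          WhisperForConditionalGeneration, WhisperProcessor)

model_name = "openai/whisper-small"
processor = WhisperProcessor.from_pretrained(model_name, language="swahili", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(model_name)

ds = load_dataset("mozilla-foundation/common_voice_11_0", "sw", split="train[:1%]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    # Log-mel input features from the audio, token ids from the transcript.
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

def collate(features):
    # Pad inputs and labels separately; mask label padding in the loss with -100.
    inputs = [{"input_features": f["input_features"]} for f in features]
    batch = processor.feature_extractor.pad(inputs, return_tensors="pt")
    labels = processor.tokenizer.pad(
        [{"input_ids": f["labels"]} for f in features], return_tensors="pt")
    label_ids = labels["input_ids"].masked_fill(labels["attention_mask"].ne(1), -100)
    # If a BOS token was already prepended during tokenization, cut it here,
    # because the model prepends it again during training.
    if (label_ids[:, 0] == processor.tokenizer.bos_token_id).all():
        label_ids = label_ids[:, 1:]
    batch["labels"] = label_ids
    return batch

args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-sw",
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    max_steps=500,
    fp16=torch.cuda.is_available(),
)

trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=ds, data_collator=collate)
trainer.train()
```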
For better insight, I suggest you look at what https://huggingface.co/oza75 is doing for the Bambara language.
Thank you!
Thanks, bro!
I will check it out, as I have always wanted to train a Whisper model. There are a lot of datasets, and I will also look for a way to merge models, perhaps.
It's good to put our own voices in, speaking native languages for common phrases, but also English with our own native English accents, or French or Portuguese even!
(We have a few colonial languages in which we have created our own voices, essentially a new dialect!)
Armed with the potential new African dataset, I will now have a decent voice for my African models, which I trained on the TACO (Alpaca) datasets and the SALT Bible dataset.
I actually stopped, but will obviously return and re-align these datasets into the models (the data is never lost or overridden, it just has low similarity).
So when training for our languages, we must also add training for the embeddings: this moves our words into the correct groupings, so our words will be analogous with their English or foreign-tongue counterparts. That means we can also cross-access the data in English but as Swahili, etc., so the English training has value to the African-language training; it is just their subject matter.
(So when we need to re-program a model to be African-centric, we must also push a larger section of the model, including the embeddings and sometimes even the lm_head, in training, just to enable some deep embedding of our data and push in the ground truth.)
So: small datasets, overfit! That is the best way to go; then change the prompt and do the same again (many epochs).
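A minimal sketch of that kind of setup with Unsloth, assuming a Llama-style base model; the model name and LoRA settings are placeholders, not a recipe:

```python
# Sketch of a LoRA fine-tune that also trains the token embeddings and the
# lm_head, so the new language's words can move into the right groupings.
# The base model name and the LoRA settings below are placeholders.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",   # placeholder base model
    max_seq_length=2048,
    load_in_4bit=True,                           # fits the free Colab tier
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",               # push the embeddings and head too
    ],
    use_gradient_checkpointing="unsloth",        # saves VRAM on small GPUs
)
# From here, train as usual (e.g. with trl's SFTTrainer) on the small,
# deliberately over-fitted dataset, then repeat with a changed prompt.
```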
If you understand how an n-gram model works, it is based on probabilistic next words: everything is based on the frequencies of the same pattern appearing. So the important tasks should be over-trained, so that the task can stand out!
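A toy illustration of that frequency effect, counting bigrams in a made-up repeated phrase:

```python
# Toy bigram counter: the "probability" of the next word is just relative
# frequency, so a pattern repeated in training data dominates retrieval.
from collections import Counter, defaultdict

corpus = ("habari yako " * 5 + "habari gani").split()   # made-up repeated phrase

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

counts = bigrams["habari"]
total = sum(counts.values())
for word, count in counts.most_common():
    print(f"P({word} | habari) = {count}/{total} = {count / total:.2f}")
# The over-represented continuation wins on frequency alone: 5/6 vs 1/6.
```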
What I find is that a small training run to get the model admitting to the task is the best way, then a deep drop of data at any loss rate, then many epochs over small sections of the data... This helps generalise and embed the task as well as the language and the information, and it also raises the probability of the data being retrieved with another prompt!
So often I remove the prompt and leave only the question and answer, without the instruction! << Again adding more diverse probabilities, and enabling the response to be connected not to the task but to its similarity...
As was the case when pressing new languages into a model (an LLM)!
So I like to have datasets with a side-by-side translation, so I can ask to translate the text into the new language, or ask a question in one language and get the answer in another... and reverse the questions and answers afterwards, to really associate the data in the embedding space!
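A small sketch of how such side-by-side examples could be built, covering both directions plus the instruction-free variants mentioned above; the column names and prompts are placeholders:

```python
# Sketch: turn a side-by-side (Swahili/English) corpus into training examples
# that cover both translation directions, plus instruction-free variants, so
# the answer is tied to the content and not to one fixed prompt wording.
pairs = [
    {"sw": "Habari ya asubuhi", "en": "Good morning"},
    {"sw": "Asante sana",       "en": "Thank you very much"},
]

def build_examples(pairs):
    examples = []
    for p in pairs:
        # Both directions with an explicit instruction.
        examples.append({"text": f"Translate to English: {p['sw']}\n{p['en']}"})
        examples.append({"text": f"Translate to Swahili: {p['en']}\n{p['sw']}"})
        # The same pairs again with the instruction removed.
        examples.append({"text": f"{p['sw']}\n{p['en']}"})
        examples.append({"text": f"{p['en']}\n{p['sw']}"})
    return examples

for example in build_examples(pairs):
    print(example["text"], "\n---")
```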
(Just a few training tips.) I use Unsloth and the Google free tier, as well as the monthly 10 euros, so I try to manage my credits to solve these issues. (Nobody speaks much about training, as they all seem to use very expensive machines!) We need a better strategy which can be reduced to run on local machines (or the free labs)!
Enjoy, brother!
I will keep in touch (although I am moving way out to the border between Mozambique and Tanzania, so less internet; I will download your model to use offline)... Thanks, bro, big respect, big up!
Stay blessed!