sermons:
We need the sermons in larger, complete sections; for referencing passages, we need that part kept separate, as otherwise the model cannot understand the sermon.
I used a text extractor in Python to extract the text files into a JSON file, with the document title as one column and the text as the other column.
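The extraction step might look like this minimal sketch, assuming the sermons sit as plain `.txt` files in one folder and the filename serves as the document title (both assumptions, not stated above):

```python
import json
from pathlib import Path

def extract_to_json(folder, out_path):
    """Read every .txt file in `folder` and write a JSON list of
    {"title": ..., "text": ...} records (title taken from the filename)."""
    records = []
    for path in sorted(Path(folder).glob("*.txt")):
        records.append({
            "title": path.stem,                        # document-title column
            "text": path.read_text(encoding="utf-8"),  # full-text column
        })
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
    return records
```

A two-column JSON list like this loads cleanly into most training pipelines without any further parsing.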
I chunked each text into 4098-sized chunks, which is good for training, as well as easy for the model to take in batches.
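The chunking step can be sketched like this, using the 4098-character window mentioned above (zero overlap between chunks is an assumption; the `chunk_id` field is added here only so each chunk stays traceable to its source document):

```python
def chunk_text(text, size=4098):
    """Split one document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def chunk_records(records, size=4098):
    """Turn {"title", "text"} records into one row per chunk."""
    rows = []
    for rec in records:
        for n, chunk in enumerate(chunk_text(rec["text"], size)):
            rows.append({"title": rec["title"], "chunk_id": n, "text": chunk})
    return rows
```

For example, `chunk_text("a" * 10, size=4)` yields three chunks, the last one shorter than the window.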
After I trained my model on this, I was able to ask questions. I used Unsloth (its text-generation task for short stories) to train the model, so the data was basically downloaded into the model without a prompt... just plain information!
That way, when I asked questions afterwards, these texts formed the internal context it drew from.
So after that, I trained the question-and-answer tasks and language pairs, and the tasks were very easy for the model to accept (given it already had the background knowledge).
With the Bible, I also created a task AFTER to recall a passage from the Bible (given the Bible reference), so it could bring the exact passages back.... easy!
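A recall task of that shape could be assembled like this sketch (the prompt wording and the `instruction`/`output` field names are my assumptions, not the exact format used above):

```python
def build_recall_pairs(verses):
    """Build (instruction, output) training pairs that teach the model
    to return the exact verse text when given its reference.
    `verses` maps a reference like "John 3:16" to the verse text."""
    pairs = []
    for ref, text in verses.items():
        pairs.append({
            "instruction": f"Recall the Bible passage {ref}.",  # assumed wording
            "output": text,
        })
    return pairs
```

Because the verse text already sits in the model's pretraining data, pairs like these only have to teach the reference-to-passage mapping, which is why the task is easy to learn.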
I used the translation Bible to translate from Swahili to English, so that also was an easy task (I had trained my model on Swahili sentence pairs and Swahili Wikipedia).
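The Swahili-to-English sentence pairs could be built by aligning any bilingual Bible on shared verse references, roughly like this (the `source`/`target` field names are assumptions):

```python
def build_translation_pairs(swahili_verses, english_verses):
    """Align two {reference: verse-text} dicts into Swahili->English
    translation pairs, keeping only references present in both."""
    pairs = []
    for ref, sw in swahili_verses.items():
        if ref in english_verses:
            pairs.append({
                "reference": ref,
                "source": sw,                     # Swahili verse
                "target": english_verses[ref],    # English verse
            })
    return pairs
```

Verse-aligned Bibles are a classic source of parallel text for low-resource language pairs, since the alignment comes for free from the reference system.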
In chat, I asked the model to produce a full timeline of the biblical events with the key characters, using their births and deaths as significant events.... and it created some amazing work!! I added Roman history afterwards to gauge a perspective that could link the Bible to today's timeline (as well as including Persian history)......
SO...
Now, if I add the sermons and the Cambridge Bible (with all references in depth, in their original languages alongside English, and fully defined)... the model could indeed produce sermons on any topic using the Bible as the key reference, and it will draw from internal context instead of needing a RAG system.
Hence my request for this data in a more usable form, my friend!
Hello!
Apologies for getting back to you late. The dataset was generated by randomly chunking sermons extracted from YouTube and then using an internally trained T4 model to label the Bible passages accordingly. I would have to check whether I can make the T4 model open source so we can generate a much larger version of this dataset with larger chunks for your use case. Would that be helpful?