I added this methodology to my models, using the prompt and giving it a space for thoughts:
For this space I found datasets which had some calculations inside as well as the answer, so I added the step-by-step analysis process inside this part of the prompt, with the final answer at the bottom of the response and no explanation:
So the main prompt includes the phrase "think step by step".... but if you do not give it a space to think, then how! <<
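Roughly, a prompt of this shape is what is meant; a minimal sketch, where the section markers and field names are my own placeholders rather than a fixed standard:

```python
# A minimal sketch of a prompt that reserves an explicit space for thoughts.
# The section markers below are arbitrary; any consistent delimiters will do.
THOUGHT_PROMPT = (
    "Think step by step. Write your working in the Thoughts section, "
    "then give only the final answer in the Answer section.\n\n"
    "### Question:\n{question}\n\n"
    "### Thoughts:\n{thoughts}\n\n"
    "### Answer:\n{answer}"
)

def build_sample(question: str, thoughts: str, answer: str) -> str:
    """Fill the template to produce one training example."""
    return THOUGHT_PROMPT.format(question=question, thoughts=thoughts, answer=answer)

print(build_sample(
    question="What is 17 + 25?",
    thoughts="17 + 25: add the tens (10 + 20 = 30), add the units (7 + 5 = 12), 30 + 12 = 42.",
    answer="42",
))
```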
So many of these theories are lovely, but they lack practical implementation inside the existing framework... leaving the domain of AI science flooded with many avenues of false trails! >>>
Hence adjusting the prompt was a simple option:
Previously I had implemented an augmented response inside the model generation, using thinking heads as discussed in this paper and others.... it was also a crystalAI model, along with another model based on the original Quiet Thoughts models:
There was a minor issue with the scripts... but after I overcame this I was able to create my own frankenmodel, which required remote code, or a GitHub clone to hack the transformers library before compiling it....
But it worked, and the training procedure was now going to be special..... The MoE models have somewhat this power internally already; in our models we could also output the data generated by each head into the response, or not, however you choose to configure the model:
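For the frankenmodel piece, the usual way to load custom modelling code with the transformers library looks something like the sketch below; the model id is a placeholder, not a real repository:

```python
# Loading a model that ships its own modelling code (a "frankenmodel") needs
# trust_remote_code=True, or a local clone of the hacked transformers fork.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/franken-thinking-model"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```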
The prompting method actually works.... as it is similar to installing a new task. I also used DPO datasets, so the rejected text became the thought and the chosen output became the response, retaining the original thought it was replacing even if it did not have one (it is a sequence like all the others, so it will still be recallable even if you choose to frame it like this).... Hence if you use the same prompt to think step by step and add the field for thoughts, then you will get the full completion, activating the task.... but if you do not invoke the thought prompt.... does that mean it does not use the methodology, hidden?

Here the idea was not just to have some type of thought but to structure the thoughts into some form of order.... so I decided to reframe as many datasets as possible in this way, even saving some of the verbose structured data into usable fields inside the thoughts area.... like recalling a record from a dataset into the thoughts and retrieving the data from the record like a query!.... and framing the thought process for all entries in the dataset, and embedding the task at 0.3....
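A rough sketch of that reframing, assuming a DPO-style record with prompt / chosen / rejected columns (the column names are assumptions and will differ between datasets):

```python
# Sketch: fold a DPO-style record into the thoughts format, with the rejected
# text re-used as the thought trace and the chosen answer kept as the response.
def reframe_dpo_record(record: dict) -> dict:
    thoughts = record.get("rejected", "").strip()  # re-used as the thought trace
    answer = record.get("chosen", "").strip()      # kept as the final response
    text = (
        "Think step by step.\n\n"
        f"### Question:\n{record['prompt']}\n\n"
        f"### Thoughts:\n{thoughts}\n\n"
        f"### Answer:\n{answer}"
    )
    return {"text": text}

# With the HuggingFace datasets library this would be applied as:
#   dataset = dataset.map(reframe_dpo_record)
```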
So now the thoughts will be structured: ...... Now if you use the prompt again, the model may generate these missing fields in memory before putting out the data.... Now we have given the model methodologies in the thoughts, and examples of how to use those methodologies in the thoughts to solve similarly shaped problems or tasks... hence mathematics improving vastly...
As we give the model, for addition and subtraction, simply the calculation in the thoughts before giving a direct answer.... as it is a simple problem that does not require much thought... but we need to teach the model basic maths first before we can ask complex maths questions.... Then we can teach the model to use these rulesets to solve basic and complex problems; at that point it is just using replacement to substitute common patterns, and we want it to predict correct answers and not just similar shapes... it will tend to hallucinate similar-type problems, but we need it to calculate based on the context and past methods....
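For the basic-maths stage, a sketch like this would generate samples where the calculation lives in the thoughts and only the result sits in the answer (purely illustrative, same placeholder fields as above):

```python
# Sketch: generate simple arithmetic samples with the working in the thoughts
# field and only the result in the answer field.
import random

def make_addition_sample() -> dict:
    a, b = random.randint(1, 99), random.randint(1, 99)
    return {
        "question": f"What is {a} + {b}?",
        "thoughts": f"{a} + {b} = {a + b}. Add the two operands directly.",
        "answer": str(a + b),
    }

samples = [make_addition_sample() for _ in range(1000)]
```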
Now.... when we give the model many examples such as this, it will have true step-by-step thinking....
Recent additions to this include tree of thoughts.... i.e. we can also make the model generate sub-agents in the mind to perform the task and then produce the output!
So with agentgen it uses something like LangChain to do this! So we can capture this data and frame it into a dataset of agent operations to place in the thoughts section, showing how it solved the problem using chains of agents internally..... So we adjust our prompt to say: generate agents to solve this task; develop the task first by passing the step-by-step instructions to each agent for its specific task, to accomplish the goal of this task; use the responses from each agent to construct the response for this task; respond to this task in a formatted and factual fashion.
Now we can insert the blurb of the transaction from the output of agentgen, or RAG, or LangChain......
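A sketch of folding such a transcript into the thoughts field, assuming the agent steps were captured as simple agent/instruction/result records (the exact log structure will depend on the tool used, whether LangChain callbacks, an agent generator, or RAG logs):

```python
# Sketch: turn a captured agent transcript into one thoughts-style sample.
def agent_trace_to_sample(task: str, steps: list[dict], final_answer: str) -> dict:
    trace_lines = [
        f"Agent {s['agent']}: given '{s['instruction']}' -> {s['result']}"
        for s in steps
    ]
    return {
        "question": task,
        "thoughts": "Generate agents to solve this task.\n" + "\n".join(trace_lines),
        "answer": final_answer,
    }
```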
Now, in the future, the model can access this prompt to solve tasks internally, producing an output using these internal agents, and again by activating the thoughts you will be displaying the verbose working of the model! (You can add "show your thoughts" or "hide your thoughts"..... so you can send the model the same problem with just input and output but not showing the thoughts (they should still be in the thoughts for training), and in the response you should also copy the thoughts to the output.... so when the prompt is not correctly installed with the thoughts field you can still access these things from chat mode!)

Again, you should also first train the same data with no prompt, as simple input and output..... only to +1, not deep, just a few random batches, to help it converge later when you give it the ability to calculate the answers; when you test it again on simple input/output the loss will have jumped down automatically!!!.... hence it learned internally..... very very very cool stuff....

So we can use these new tools like agentgen and LangChain temporarily, to gather good data from models and other models as well as RAG verboses.... to train our models to think with that methodology, so we don't need LangChain at all, as it will be internal!!! <<< Hence we trained it to think!! <<<< (Tasks with thought patterns.) (Data should not be based on opinion, only on entered fact!! Opinion is only there to give the model sarcasm and chat abilities, so it should be framed as chat in some fake role!!)
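A sketch of emitting the three variants described above (thoughts shown, thoughts hidden, and plain input/output), using the same placeholder field layout as the earlier sketches:

```python
# Sketch: three training variants of one record. The "hidden" variant keeps the
# instruction but drops the thoughts from the visible text; whether the thought
# text is still kept elsewhere for training is a choice left to the pipeline.
def make_variants(question: str, thoughts: str, answer: str) -> list[str]:
    shown = (
        f"Show your thoughts.\n### Question:\n{question}\n\n"
        f"### Thoughts:\n{thoughts}\n\n### Answer:\n{answer}"
    )
    hidden = (
        f"Hide your thoughts.\n### Question:\n{question}\n\n"
        f"### Answer:\n{answer}"
    )
    plain = f"### Question:\n{question}\n\n### Answer:\n{answer}"
    return [shown, hidden, plain]
```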