Seems somewhat related to a genetic algorithm approach I started working on - any opinions?

#1
by kalomaze - opened

I was in the process of learning to set up Axolotl for QLoRA purposes after seeing CollectiveCognition, a Mistral QLoRA trained on just 100 instruction samples. My hypothesis is that you can take a genetic algorithm approach to distilling the 'best' examples out of a more diverse dataset: randomize pairs, track which instructions trend toward higher scores on specific benchmarks, and repeat the process until you are left with a small but diverse dataset where only the most relevant examples remain, so it does a better job of generalizing to different tasks. Do you have opinions on this approach, or do you think it would be useful for distilling the best examples out of a wider spread of synthetic 'assistant' data? You seem more experienced than me in this field, so I was curious about your thoughts.
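Roughly, the selection loop I'm imagining looks something like the sketch below. Everything here is hypothetical: `evaluate_subset` is just a mocked placeholder for what would really be "fine-tune a QLoRA adapter on this candidate subset and score it on a benchmark", and the toy `quality` field stands in for whatever signal that benchmark would give.

```python
import random

# Hypothetical stand-in: in practice this would train a QLoRA adapter on the
# candidate subset and return a benchmark score. Mocked here so the selection
# loop itself is runnable.
def evaluate_subset(subset):
    return sum(example["quality"] for example in subset) / len(subset)

def distill_dataset(pool, subset_size=100, generations=20,
                    trials_per_gen=30, keep_fraction=0.8):
    """Credit each example by how well random subsets containing it score,
    then prune the lowest-credited examples each generation."""
    fitness = {example["id"]: 0.0 for example in pool}
    counts = {example["id"]: 0 for example in pool}

    for _ in range(generations):
        for _ in range(trials_per_gen):
            subset = random.sample(pool, min(subset_size, len(pool)))
            score = evaluate_subset(subset)
            for example in subset:
                fitness[example["id"]] += score
                counts[example["id"]] += 1

        # Average score per example; examples never sampled stay neutral.
        avg = {eid: (fitness[eid] / counts[eid] if counts[eid] else 0.0)
               for eid in fitness}

        # Keep the top fraction of the pool and discard the rest.
        pool = sorted(pool, key=lambda ex: avg[ex["id"]], reverse=True)
        pool = pool[: max(subset_size, int(len(pool) * keep_fraction))]

    return pool[:subset_size]

if __name__ == "__main__":
    # Toy pool of synthetic examples; real data would be instruction pairs.
    pool = [{"id": i, "quality": random.random()} for i in range(2000)]
    best = distill_dataset(pool)
    print(len(best), "examples retained")
```

The expensive part is obviously the evaluation step, since each trial implies a fine-tune plus a benchmark run, which is why I'd want to keep the subsets and the number of trials per generation small.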
