
How did you train this? Which hyperparameters did you select?

by PhilipMay - opened

... maybe you could share this?
If it was trained with Axolotl, perhaps you could share the configuration with us?

The training was multi-step, using the AxonZeta platform (not released yet; it's still under development). It incorporates different tools such as HF / Accelerate, Axolotl, LM evals, custom LoRA merge / factorisation, etc., plus a lot of orchestration.

The idea is multi-epoch training on different datasets, where you carry the best-performing checkpoints into the next step (i.e. you don't assume the best checkpoint is the one at the end of epoch N). This is coupled with a multi-run system where you also give a range of hyperparameters to try, e.g. full finetune or QLoRA (and, if QLoRA, the rank and delta ranges to train with, plus an optional separate delta to merge with), learning rates, etc. The platform represents these as a hierarchy of training steps, with the ultimate aim of converge-training a range of performant finetuned agent models.
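The checkpoint-carrying idea can be sketched roughly like this (all function names here are hypothetical; the actual AxonZeta orchestration is not public). The point is that every (hyperparameter, epoch) checkpoint is scored, and the winner of the whole grid is promoted to the next step:

```python
def run_step(base_ckpt, dataset, hyperparam_grid, epochs, train_fn, eval_fn):
    """Hypothetical sketch: train over a grid of hyperparameters, score the
    checkpoint after every epoch, and return the single best checkpoint to
    seed the next step (it need not be the one from the final epoch)."""
    best_score, best_ckpt = float("-inf"), base_ckpt
    for hp in hyperparam_grid:
        ckpt = base_ckpt
        for epoch in range(epochs):
            ckpt = train_fn(ckpt, dataset, hp, epoch)
            score = eval_fn(ckpt)  # e.g. an LM Evaluation Harness task score
            if score > best_score:
                best_score, best_ckpt = score, ckpt
    return best_ckpt, best_score


# Toy demo: a "checkpoint" is just a number, training nudges it by the
# learning rate, and the eval peaks at 1.0 — so the best checkpoint falls
# mid-run, not at the end of training.
def toy_train(ckpt, dataset, hp, epoch):
    return ckpt + hp["lr"]

def toy_eval(ckpt):
    return -(ckpt - 1.0) ** 2

best, score = run_step(0.0, None, [{"lr": 0.3}, {"lr": 0.5}], 4,
                       toy_train, toy_eval)
```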

The inter-step evaluation process is interesting: as well as LM Evaluation Harness etc., it also brings in tools such as WeightWatcher to measure the effect of a LoRA on the base model as training progresses. This is especially useful on smaller models, where you can overfit or overwhelm the existing weights and end up with a model that is more specific but worse overall.
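If the "delta" mentioned above corresponds to the usual LoRA scaling factor (my assumption — the post doesn't define it), then merging an adapter into the base weights with a scale chosen at merge time, possibly different from the one trained with, is just the standard LoRA update W' = W + (alpha / rank) * B @ A. A dependency-free sketch with plain nested lists:

```python
def merge_lora(W, A, B, rank, alpha):
    """Merge a LoRA adapter into base weights: W' = W + (alpha / rank) * B @ A.
    A has shape (rank x in), B has shape (out x rank); W is (out x in).
    `alpha` can differ at merge time from the value used during training."""
    scale = alpha / rank
    out_dim, in_dim = len(B), len(A[0])
    delta = [[scale * sum(B[i][r] * A[r][j] for r in range(rank))
              for j in range(in_dim)] for i in range(out_dim)]
    return [[W[i][j] + delta[i][j] for j in range(in_dim)]
            for i in range(out_dim)]


# Toy rank-1 example: B @ A = [[3, 6], [4, 8]], scaled by alpha/rank = 2.
merged = merge_lora(W=[[0, 0], [0, 0]],
                    A=[[1, 2]],        # 1 x 2
                    B=[[3], [4]],      # 2 x 1
                    rank=1, alpha=2)
```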

I should put all of this in a blog post with more details on the AxonZeta platform, using phi-2-orange (or later iterations) as an example. There are also Mistral, Yi-6 and Yi-34 trials underway.
