A rank 8 LoRA merge tune of Mistral 7B.

[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training on a 4x NVIDIA A40 GPU cluster, graciously provided by [Arc Compute](https://www.arccompute.io/).

This is the prompt format:

```
### System:
### Instruction:
### Response:
```

Trained on a subset of:

- koishi
- asss
- dolly
- hh-rlhf
- wizard evol
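As a minimal sketch, a prompt in this format could be assembled like so. `build_prompt` is a hypothetical helper, and the exact whitespace between sections is an assumption, since the card only specifies the section headers.

```python
def build_prompt(system: str, instruction: str) -> str:
    """Assemble a prompt in the ### System / ### Instruction / ### Response
    format described above. Blank lines between sections are an assumption."""
    return (
        f"### System:\n{system}\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n"
    )

prompt = build_prompt(
    "You are a helpful assistant.",
    "Summarize the plot of Hamlet in one sentence.",
)
print(prompt)
```

The model's completion would then be generated after the final `### Response:` header.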