7B, 33B and 65B versions?

#2 · opened by flashvenom

Are you planning on training adapters for the different model sizes? Also, if you don't mind, do you have the code/dataset you used to train these? I would love to experiment more with these SuperHOT LoRAs.

I am not releasing the dataset yet, since I am still experimenting with it.

I am currently training the 30B 8K version and will upload it shortly. I encourage anyone to make their own LoRA/finetune with this method.
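For anyone who wants to try: the core of the approach is interpolating the RoPE positions so the fine-tune sees an 8K window compressed into the base model's original 2K range. Below is a minimal, self-contained sketch of that idea, not the actual training code; the `rope_angles` helper and the 0.25 scale factor (2048 / 8192) are illustrative assumptions.

```python
# A minimal sketch of RoPE position interpolation (assuming that is what
# "this method" refers to). The 0.25 scale factor is an assumption:
# original context / extended context = 2048 / 8192.
import torch

def rope_angles(dim: int, seq_len: int, scale: float = 0.25,
                base: float = 10000.0) -> torch.Tensor:
    """Rotary embedding angles with interpolated (compressed) positions."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    # The key step: multiply positions by `scale` so tokens at positions
    # 0..8191 map onto the 0..2047 range the base model saw in pretraining.
    t = torch.arange(seq_len, dtype=torch.float32) * scale
    freqs = torch.outer(t, inv_freq)          # (seq_len, dim // 2)
    return torch.cat((freqs, freqs), dim=-1)  # (seq_len, dim), ready for cos/sin

# Example: angle cache for an 8K sequence with a 128-dim attention head
angles = rope_angles(dim=128, seq_len=8192)
cos, sin = angles.cos(), angles.sin()
```

Note that the same scaling has to be applied at inference time too, which is why adapters like this are typically paired with a small patch to the model's rotary embedding.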

Fair enough, thanks for the information!

Quick question: are you going to release a 7B version? Curious to see how it works.
