Willing to share the dataset for someone else to tune a 13b model?

#16 opened by Cypherfox

Greetings,
A few small tests suggest this is a very good model from a coding perspective, but on an L40 with 48GB I still have to run it in 8-bit quantization and only get around 3-4 tokens per second. I know the dataset doesn't necessarily scale down to smaller models, but if you'd be willing to share it, I'd love to try fine-tuning the 13B Code Llama model on it myself. (If I can get the GPUs for it. :P )
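For reference, this is roughly how I'm loading it in 8-bit at the moment (a minimal sketch using transformers + bitsandbytes; the model id is a placeholder and the generation settings are just my assumptions):

```python
# Rough sketch of 8-bit loading on a single 48GB L40.
# "org/model-name" is a placeholder -- swap in the actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "org/model-name"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",          # let accelerate place layers on the GPU
    torch_dtype=torch.float16,  # non-quantized layers in fp16
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```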

The nice part is that 13B models both train and infer faster, so hopefully it won't take quite as long.

I understand if it's from internal sources and can't be shared; I just thought I'd ask.

In any case, whether or not you can share the dataset, is there any chance you could share some information about how you trained it? What tooling and hyperparameters did you use, for example?

There are a lot of broken models out there that perform well in limited circumstances, or with specific prompts, but fail in more general use in ways that are only fixable by retraining. (E.g. finishing their response and then just... flowing into random text generation, or dumping unrelated code, which is usually caused by the training dataset not using stop tokens.) So I'd like to learn more about how to build models that are solid performers and don't have those bugs.
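To be concrete, the kind of stop-token fix I mean is just making sure every training example actually ends with the tokenizer's EOS token, so the model learns when to stop (a rough sketch, assuming a Hugging Face tokenizer; field names like "prompt"/"response" are illustrative):

```python
# Sketch: terminate every training example with EOS so generation stops cleanly.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("org/model-name")  # placeholder

def format_example(example):
    # Concatenate prompt and response, then append the EOS token explicitly.
    text = example["prompt"] + example["response"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=2048)

# e.g. tokenized_dataset = dataset.map(format_example)
```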

Thanks for any help you can give!

-- Morgan

Any luck getting the dataset for fine-tuning?
