Will the unquantized weights be released?
I would like to try this finetune, but I can currently only run GGML models. Could the unquantized weights be made available?
I second this request. Right now, only a quantized safetensors file is available.
Getting the raw PyTorch weights out would be very helpful, and before you blink TheBloke would have GGMLs out in every quant, with a SuperHOT version coming later.
A SuperHOT version would not be wise for this model, since it already has SuperHOT merged into it; it would dilute the existing merge balance.
But yes, I also really want an FP16 version of this, and I asked the author for it to future-proof the model.
Apologies for falling off the grid for some time; I'm uploading the weights at this moment and assembling a detailed model card. Thank you for your patience!
Weights are up!