Could you please make a GGUF version too?
Please make the quantized version of this model. Q5_K_M and Q6_K, please.
Okay, I'll look into it and try to have it done soon!
Thank you so much. I am waiting for it :)
It will probably be a day or two for it to load, train, and save the quantized version. But it's currently going!
And thank you for your support!
Had to do a little bit of research on it, and it was easy to understand. I got it working within minutes, but I had to retrain the entire model twice because the first time I didn't set it up correctly to save. I'm hoping I did it correctly this time; it's looking promising, and I should have it by today!... I'm hoping.
Thank you so much for that. I'm looking forward to it!
https://huggingface.co/ayjays132/PhillnetLargeQuantized
You will have to download it and load it locally for now.
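Since the repo currently ships PyTorch `.bin` checkpoints rather than GGUF files, loading from a local download is essentially a `torch.load` of the state dict. This is just a minimal sketch: the `TinyModel` class and file name below are placeholders standing in for the actual Phillnet model definition and checkpoint, which I don't have details of.

```python
import torch
import torch.nn as nn

# Placeholder model class -- substitute the real Phillnet definition here.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

# Simulate a downloaded checkpoint by saving one locally first.
model = TinyModel()
torch.save(model.state_dict(), "pytorch_model.bin")

# Load the checkpoint back from the local .bin file (CPU-safe).
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
restored = TinyModel()
restored.load_state_dict(state_dict)
restored.eval()
print(sorted(state_dict.keys()))
```

The key point is matching the state dict keys to the model class you instantiate locally; `map_location="cpu"` lets the file load on machines without a GPU.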
Thank you so much for that. Although I was expecting the Q5_K_M version instead of .bin files :). You are the best!
You are welcome! Phillnet is custom-built from a hybrid mix of multiple frameworks, and PyTorch seemed best for saving. I'll look into other ways for you.
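For the GGUF route specifically, the usual path is llama.cpp's conversion tooling: convert to an f16 GGUF first, then quantize down to Q5_K_M and Q6_K. A sketch of that workflow is below; the directory and file names are placeholders, and this assumes the checkpoint can be expressed in a standard Hugging Face transformers layout, which may not hold for a custom hybrid model like Phillnet.

```shell
# Sketch of the standard llama.cpp GGUF workflow; paths are placeholders.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Build the quantize tool.
cmake -B build
cmake --build build --target llama-quantize

# 1. Convert the HF-format checkpoint to an f16 GGUF file.
python convert_hf_to_gguf.py /path/to/PhillnetLargeQuantized \
    --outfile phillnet-f16.gguf --outtype f16

# 2. Quantize to the requested formats.
./build/bin/llama-quantize phillnet-f16.gguf phillnet-Q5_K_M.gguf Q5_K_M
./build/bin/llama-quantize phillnet-f16.gguf phillnet-Q6_K.gguf Q6_K
```

If the architecture isn't one the converter recognizes, the conversion step will fail, which would explain why a plain PyTorch save was the practical choice here.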