Why is the text_encoder model 3.94G in the OpenCLIP (CLIP ViT-H) repository, while it is 1.36G in this repository?
What do I have to do if a Google Colab notebook needs a config.json file for this model and does not find it?
15+ Stable Diffusion Tutorial Videos: Automatic1111 Web UI for PC, Shivam Google Colab, and NMKD GUI - DreamBooth, Textual Inversion, Training, Model Injection, Custom Models, Txt2Img
How can I add the DPM SDE Karras sampler from the Automatic1111 repo to Diffusers? Is there an easy way to do this without customizing the code?