Error Message

#1
by hjconstas - opened

First, thank you for making this, it's super sweet! I was trying to mess with it on my computer but I'm running into this error:

The config attributes {'dropout': 0.0, 'sample_size': 32} were passed to ControlNetModel, but are not expected and will be ignored. Please verify your config.json configuration file.
vae\diffusion_pytorch_model.safetensors not found

I'm super new to coding and Hugging Face as a whole. Also, every time I try running it, the program tries to download the safetensors even though I've already installed it. Can you think of anything I should look at to resolve this? Thank you!

Hey hjconstas, thanks for your post.

About the config warning: this is just a warning coming from the diffusers config, and it can safely be ignored.

About downloading the safetensors: diffusers normally caches downloaded weights in the ~/.cache/huggingface/hub directory (on Unix systems). I chose to change the cache directory to ./cache inside the code directory. You can restore the default behaviour by removing the cache="./cache" argument in the code. I'm also thinking of baking the safetensors into the Docker image, so the container would launch with the model already present instead of downloading it afterwards.
This is also useful if you use Docker: you can mount your local ./cache directory onto the one in the container to avoid a fresh download at each start.

To do so you can execute:

docker run --gpus=all --ipc=host -v cache:/app/cache -p 7860:7860 qrcode_diffusion:latest

with your image tagged as qrcode_diffusion:latest via docker build -t qrcode_diffusion .
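If you want to confirm that the weights really landed in the local cache (and so won't be re-downloaded on the next start), here is a minimal stdlib-only sketch; the ./cache path matches the layout described above, and the helper name is just illustrative:

```python
from pathlib import Path

def cached_safetensors(cache_dir: str) -> list[str]:
    """Return the relative paths of all .safetensors files found under cache_dir."""
    root = Path(cache_dir)
    if not root.exists():
        return []  # nothing cached yet: the model will be downloaded on next run
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.safetensors"))

# An empty list means the app will download the weights again on startup
print(cached_safetensors("./cache"))
```

If this prints an empty list right after a run, the cache directory argument is probably not pointing where you think it is.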

Thank you so much for your help!

Apologies for bothering you again but I really like what you did here and would love to mess around with it. I think I did something wrong as I've never used docker before. Could you give some more of the explicit steps I'd have to do to get this running through docker? I'm probably doing things in the wrong order or way. I'd really appreciate your time and I know that's asking a lot! Thank you :)

Here is a quick recap of the steps you may follow to get the docker image running:

  • Install Docker on your machine, plus nvidia-docker if needed > search YouTube for a walkthrough for your specific operating system
  • Check that Docker is working with your GPU and NVIDIA CUDA with sudo docker run --rm --gpus=all --ipc=host pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime nvidia-smi. You should get an output similar to this one:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI ***    Driver Version: ***   CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
...
  • If this fails because the Docker daemon is not running, review your Docker installation. If it prints nothing or a CUDA error, review your nvidia-docker installation. Otherwise you can continue.
  • Download the repository from the Hugging Face Hub with `git clone https://huggingface.co/spaces/blanchon/qrcode-diffusion`
  • Build the docker image and tag it as qrcode-diffusion with docker build -t qrcode-diffusion . (don't forget the dot at the end)
  • Run the docker image with docker run --gpus=all --ipc=host -v cache:/app/cache -p 7860:7860 qrcode-diffusion. Note that -p 7860:7860 exposes the gradio web interface to your host machine, and -v cache:/app/cache caches the model weights on your host machine.
  • Go to http://localhost:7860/ and you should see the gradio interface. You can now play with it.
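Once the container is up, you can sanity-check from Python that the gradio server answers on the mapped port before opening the browser. A small stdlib-only sketch (the function name is just illustrative; the port matches the -p 7860:7860 mapping above):

```python
from urllib.request import urlopen
from urllib.error import URLError

def server_is_up(url: str = "http://localhost:7860/", timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers successfully at the given URL."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False  # connection refused, timeout, or HTTP error

print(server_is_up())
```

This prints True once the container is running and the gradio app has finished starting, and False otherwise.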

By the way, Hugging Face finally published my docker image, so you can skip the Download and Build steps and get the image directly with:

docker run -it -p 7860:7860 --platform=linux/amd64 registry.hf.space/blanchon-qrcode-diffusion:latest python app.py

And use it with the tag registry.hf.space/blanchon-qrcode-diffusion:latest

Another option would be to duplicate my space (use the tool in the top right corner of the space page) and then run the space privately on a GPU rented from HF (an Nvidia T4 Small is enough to get an image in 20s). However, this will cost you some credits.

Thank you so much! I really appreciate you taking the time to help me get started! It means a lot. Have a great day!

It's working!! Thank you again! :)

Great, I suppose I can close this discussion then ;)

blanchon changed discussion status to closed
