How did you make this?

#3
by hackerjohn - opened

I'm trying to learn how to create models like yours. Would you mind explaining the workflow it takes to create a model like this? Is it a fine-tuned version of ControlNet? I'm new to this space and would love some advice on how to get started creating my own Stable Diffusion models. Thanks!

Monster labs org

Yes, it is a ControlNet based on Stable Diffusion 1.5, as described in this paper: https://arxiv.org/abs/2302.05543
Here are some resources to get started:
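For example, running this kind of ControlNet with the diffusers library looks roughly like this; the repo id, file paths, and settings below are placeholders to adapt rather than exact values:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights and attach them to a standard SD 1.5 base model.
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster",  # placeholder repo id
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")  # or "cpu"

# The conditioning image is a plain black-and-white QR code.
qr_image = load_image("qr_code.png")  # placeholder local file

image = pipe(
    prompt="a medieval castle on a hill, detailed oil painting",
    image=qr_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.1,  # higher = more scannable, less creative
).images[0]
image.save("qr_art.png")
```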

Thanks for the links @achiru, I really appreciate it. So in this case, I'm assuming you trained a ControlNet on a large dataset of plain QR codes paired with their corresponding AI-generated images and prompts. How does one create a dataset like this? Would you have to pick out the best AI-generated QR code images manually? How large a dataset would it take to achieve significant results?
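To make my assumption concrete, I'm picturing training examples shaped roughly like this, with the plain QR code as the conditioning image (just my guess, using the `qrcode` package; the file name and caption are made up):

```python
import qrcode
from PIL import Image

def make_conditioning_image(data: str, size: int = 512) -> Image.Image:
    """Render a plain black-and-white QR code to use as the conditioning image."""
    qr = qrcode.QRCode(
        error_correction=qrcode.constants.ERROR_CORRECT_H,  # high error correction
        box_size=16,
        border=4,
    )
    qr.add_data(data)
    qr.make(fit=True)
    img = qr.make_image(fill_color="black", back_color="white").convert("RGB")
    return img.resize((size, size), Image.NEAREST)

# One training record: conditioning image, target image, caption.
# The target image would be a stylized picture that still scans as the same
# QR code; the file name and caption here are made up for illustration.
record = {
    "conditioning_image": make_conditioning_image("https://example.com"),
    "image": Image.open("target_0001.png"),
    "text": "a cozy cabin in a snowy forest, warm light",
}
```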

I think this is really impressive, and the results are better than other similar tools on the market. I'd also like to build something similar, but I don't know where to start. Could you give me some pointers? What kind of data should I prepare to train for this kind of effect?

diffusion_pytorch_model.safetensors
If I use SD, do I need to download this file?
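Or is it fetched automatically when loading with diffusers? I'm picturing something like this (the repo id is just my guess):

```python
from diffusers import ControlNetModel

# I assume from_pretrained downloads and caches diffusion_pytorch_model.safetensors
# into the local Hugging Face cache automatically?
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster"  # guessed repo id
)
```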

@achiru any hints about the dataset and training parameters?

@achiru Trying to ask again since you might have missed okaris' comment: how was the dataset for this model obtained? Did you manually create tens of thousands of creative QR codes, together with their prompts and the original black-and-white QR codes, and use that as the ControlNet training set? That doesn't seem likely to me, since it would take an extreme amount of labor, so I'm guessing you used some model or some trick, but I'd like to hear what you actually did.
Thanks in advance for your response :)

Monster labs org

Yeah, sorry, I didn't see your comments on this old discussion. Please refer to #100 for more info.
