Is there a guide on how to make embeddings like this?
So far I have only done Dreambooth training, which isn't possible for 2.0, and I am curious how to do embeddings. I can't find a good guide, and I notice you say you use the Automatic1111 UI to do it.
@GamingDaveUK The Automatic1111 repo has some more information on training embeddings here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion
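For a rough picture of what textual inversion is doing under the hood (the UI hides this), here is a minimal conceptual sketch using the Hugging Face diffusers/transformers APIs. This is not the Automatic1111 implementation; the model name, placeholder token, initializer word, and learning rate below are placeholder assumptions, and the actual training loop is omitted:

```python
# Minimal sketch of textual inversion, assuming a diffusers pipeline.
# The key idea: add one new token and optimize only its embedding vector,
# keeping the UNet and the rest of the text encoder frozen.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base")
tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder

# 1. Register a new placeholder token for the concept being learned.
placeholder = "<my-concept>"  # hypothetical token name
tokenizer.add_tokens(placeholder)
text_encoder.resize_token_embeddings(len(tokenizer))
token_id = tokenizer.convert_tokens_to_ids(placeholder)

# 2. Initialize its embedding from a related word (assumed here: "painting").
embeds = text_encoder.get_input_embeddings().weight
init_id = tokenizer.convert_tokens_to_ids("painting")
with torch.no_grad():
    embeds[token_id] = embeds[init_id].clone()

# 3. Freeze everything, then re-enable gradients only on the embedding table.
#    Real training scripts also mask the gradient so only the new row updates;
#    the usual denoising loss is then run on your training images with prompts
#    containing the placeholder token (loop omitted in this sketch).
for p in pipe.unet.parameters():
    p.requires_grad_(False)
for p in text_encoder.parameters():
    p.requires_grad_(False)
embeds.requires_grad_(True)

optimizer = torch.optim.AdamW([embeds], lr=5e-4)  # assumed learning rate
```

The Automatic1111 UI wraps this same idea behind the Train tab, so the wiki page above is still the practical route; the sketch is only meant to show why an embedding is just a small learned vector rather than a full model fine-tune.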
I have a similar question, but I already know how to train embeddings with Automatic1111. Instead, what would you suggest for source images if I wanted to create a TI that places the main content of an SD generation inside a mock-up? For instance, a generation of "a pixar style cat" inside a blank canvas hanging on a wall. How would the training images have to be structured so that the main content is generated inside the mock-up? Your training gave me the idea that this should be possible. Thank you in advance for any feedback!