I'm trying to get a good workflow for my card game project, and I'm running into some difficulty. I'm hoping someone here can point me in the right direction.
Here is a quick summary of what I'm doing.
I use Stable Diffusion to generate a "triple-wide" landscape, then cut it into thirds so that each piece has the right dimensions for a playing card. From there, I use a mixture of ControlNet with Canny edge maps and inpainting to keep the edges of adjacent cards aligned.
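For context, the first step looks roughly like the sketch below (the model ID and prompt are placeholders for whatever checkpoint and prompts I'm actually using):

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; swap in your own model.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One triple-wide landscape: three 512x768 cards side by side.
panorama = pipe(
    "sweeping fantasy landscape, wide panorama",  # placeholder prompt
    width=1536,
    height=768,
).images[0]

# Slice the panorama into three 512x768 card images.
cards = [panorama.crop((i * 512, 0, (i + 1) * 512, 768)) for i in range(3)]
for i, card in enumerate(cards):
    card.save(f"card_{i}.png")
```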
I would like to be able to evolve the landscape by inserting vehicles, cities, environmental effects, etc. The idea is that the landscape evolves as the players make decisions in the game.
Here is a rough draft of a final landscape. All three "thirds" were generated separately.
This is where I'm stuck.
Right now, I am using a class called `StableDiffusionControlNetInpaintImg2ImgPipeline` that I found on Google. It isn't supported by Hugging Face, and it doesn't support LoRA models. I really want a workflow that is supported in the diffusers library, not some random code from the internet, and since diffusers was recently updated to support loading safetensors LoRAs, I'd really like to switch. I want to use LoRAs, either from the community or trained myself, to get consistent results; right now the results can get pretty inconsistent. Can anyone point me in the right direction so I can ditch this unsupported code?
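Based on the diffusers docs, I think the supported route would be `StableDiffusionControlNetInpaintPipeline` together with `load_lora_weights`, something like the sketch below, but I haven't gotten it working end to end (the model IDs, file paths, and prompt are placeholders):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Load a safetensors LoRA (placeholder path; community or self-trained).
pipe.load_lora_weights("path/to/lora.safetensors")

image = load_image("card_1.png")        # the third being evolved
mask = load_image("card_1_mask.png")    # white = repaint, black = keep
canny = load_image("card_1_canny.png")  # precomputed Canny edge map

result = pipe(
    prompt="the same landscape, now with a walled city",  # placeholder
    image=image,
    mask_image=mask,
    control_image=canny,
    width=512,
    height=768,
).images[0]
result.save("card_1_city.png")
```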
Also, I'm hitting a newbie error that I can't figure out for the life of me. The unsupported code handles my card dimensions of 512x768 fine, but StableDiffusionInpaintPipeline gives me errors unless the dimensions are 512x512. How can I resolve this?
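For concreteness, here is roughly the call I want to work (the model ID, paths, and prompt are placeholders; I'm also not sure whether I'm supposed to pass `height` and `width` explicitly, since without them the pipeline seems to assume 512x512):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("card_1.png")      # 512 wide x 768 tall card
mask = load_image("card_1_mask.png")  # same size as the card

result = pipe(
    prompt="a storm rolling in over the mountains",  # placeholder
    image=image,
    mask_image=mask,
    width=512,   # must match the card and be a multiple of 8
    height=768,
).images[0]
result.save("card_1_storm.png")
```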
Thank you very much for your time. I could give more examples, but since I am new on the forums, I can only embed one image.