I’m trying to get a good workflow for my card game project, and I’m running into some difficulty. I’m hoping someone here can point me in the right direction.
Here is a quick summary of what I’m doing.
I use Stable Diffusion to generate a ‘triple-wide’ landscape, then cut it into thirds so that each piece has the right dimensions for a playing card. From there, I use a mixture of ControlNet with Canny edge maps and inpainting to keep the edges of adjacent cards aligned.
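For reference, the slicing step itself is simple; here is a minimal sketch of how I do it with Pillow (the card size and filename are just placeholders from my setup):

```python
from PIL import Image

CARD_W, CARD_H = 512, 768  # my target card dimensions

def slice_triptych(image: Image.Image) -> list[Image.Image]:
    """Cut a triple-wide landscape into three equal-width thirds."""
    w, h = image.size
    third = w // 3
    # crop boxes are (left, upper, right, lower)
    return [image.crop((i * third, 0, (i + 1) * third, h)) for i in range(3)]

# usage: generate a 1536x768 landscape, then
# cards = slice_triptych(Image.open("landscape.png"))
```

So a 1536x768 generation gives me three 512x768 cards; the hard part is keeping the seams coherent, which is where the ControlNet/inpainting step comes in.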
I would like to be able to evolve the landscape by inserting vehicles, cities, environmental effects, etc. The idea is that the landscape evolves as the players make decisions in the game.
Here is a rough draft of a final landscape. All three ‘thirds’ were generated separately.
This is where I’m stuck.
Right now, I am using a class called “StableDiffusionControlNetInpaintImg2ImgPipeline” that I found on Google. It isn’t officially supported by Hugging Face, and it doesn’t support LoRA models. I really want a workflow that lives in the diffusers library itself, not random code from a Google search, especially since diffusers was recently updated to support loading LoRAs from safetensors files. I would like to use LoRAs, either from the community or trained myself, to get consistent results; right now the outputs can get weird. Can anyone point me in the right direction so I can ditch this unsupported code?
Also, I am running into a newbie error that I cannot figure out for the life of me. The unsupported code handles my card dimensions of 512x768 just fine, but StableDiffusionInpaintPipeline gives me errors unless the dimensions are 512x512. How can I resolve this?
Thank you very much for your time. I could give more examples, but since I am new on the forums, I can only embed one image.