Hello dear devs. I am using stable_diffusion_controlnet_inpaint.py (from the community examples, main version) to generate a defective product from an initial image, a mask of the defect area, and two ControlNet conditioning images. When I generate an image with the same parameters as in SD Webui (enable_controlnetmodel_1, enable_controlnetmodel_2), the generated defect area is totally different. My questions are:
- Are these assignments correct (as per the examples from the website and GitHub)?

```python
controlnet = [ControlNetModel_1, ControlNetModel_2]
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(INPT_MODEL, controlnet=controlnet, ...)
…
pipe(..., controlnet_conditioning_image=[image_for_model1, image_for_model2]).images[0]
```
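For completeness, here is a minimal, self-contained version of what I am running (a sketch, not my exact script: the model IDs, file names, prompt, and the per-model controlnet_conditioning_scale list are placeholders/assumptions on my side):

```python
import torch
from diffusers import ControlNetModel, DiffusionPipeline
from diffusers.utils import load_image

# Two ControlNets (placeholder checkpoints); I pass them as a list.
controlnet_1 = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
controlnet_2 = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)

# Load the community pipeline; as far as I understand, this is equivalent to
# importing the class directly from stable_diffusion_controlnet_inpaint.py.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # my INPT_MODEL
    controlnet=[controlnet_1, controlnet_2],
    custom_pipeline="stable_diffusion_controlnet_inpaint",
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("product.png")      # initial image
mask_image = load_image("defect_mask.png")  # white = defect area to inpaint
cond_1 = load_image("cond_model1.png")      # conditioning image for ControlNet 1
cond_2 = load_image("cond_model2.png")      # conditioning image for ControlNet 2

result = pipe(
    prompt="a product with a surface defect",            # placeholder prompt
    image=init_image,
    mask_image=mask_image,
    controlnet_conditioning_image=[cond_1, cond_2],      # one image per ControlNet
    controlnet_conditioning_scale=[1.0, 1.0],            # per-model weights (assumed supported)
    num_inference_steps=30,
).images[0]
result.save("out.png")
```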
- I could not find pipeline parameters for settings such as mask blur or inpaint area (whole picture vs. only masked). Can these be set, or have they not been implemented yet? (The only workaround I could come up with is sketched below.)
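To approximate mask blur, I tried feathering the paste-back step myself: blur the mask with PIL and alpha-composite the generated image over the original. A sketch, assuming result and init_image from the snippet above (same size), with the radius being my guess at what corresponds to Webui's mask_blur:

```python
from PIL import Image, ImageFilter

# Soften the binary defect mask; radius ~ Webui's "mask blur" (my assumption,
# not a documented equivalence).
mask = Image.open("defect_mask.png").convert("L")
soft_mask = mask.filter(ImageFilter.GaussianBlur(radius=4))

# Feathered composite: where the mask is white, take the generated pixels;
# where it is black, keep the original; gray edge values blend the two.
blended = Image.composite(result, init_image, soft_mask)
blended.save("out_feathered.png")
```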
- How can I make the script-generated image similar to SD Webui's? If you have any idea what the issue might be, I would really appreciate your help! (The sampler/seed alignment I attempted is sketched below.)
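One difference I already suspect is the sampler and the seed/noise handling, so I tried to align those on the diffusers side. A sketch of that attempt (the mapping of Webui's "Euler a" to EulerAncestralDiscreteScheduler and the seed value are assumptions on my part):

```python
import torch
from diffusers import EulerAncestralDiscreteScheduler

# Swap in the scheduler that (I believe) corresponds to Webui's "Euler a".
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Fix the seed. Even with the same seed, Webui and diffusers draw the initial
# noise differently, so pixel-identical output is not guaranteed.
generator = torch.Generator(device="cuda").manual_seed(12345)  # placeholder seed

result = pipe(
    prompt="a product with a surface defect",
    image=init_image,
    mask_image=mask_image,
    controlnet_conditioning_image=[cond_1, cond_2],
    generator=generator,
    num_inference_steps=30,  # match Webui's sampling steps
    guidance_scale=7.5,      # match Webui's CFG scale
).images[0]
```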
P.S. I have read many implementations of ControlNet + inpaint pipelines on GitHub; I came here as a last resort.