AI Interior Design Web App

First, decide whether to use a dedicated model such as VTON or a general-purpose diffusion model for inpainting. If you go with a general-purpose model, note that the built-in inpainting pipeline in Diffusers assumes a model that was specifically trained for inpainting, which can be cumbersome to work with. A better option is to use a regular model combined with ControlNet for inpainting. Both SDXL and SD1.5 have multiple ControlNet options, so choose whichever fits your needs. FLUX offers better image quality and prompt understanding, but it requires significant resources for fine-tuning.
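
Below is a minimal sketch of the ControlNet inpainting approach with Diffusers, assuming an SD1.5 base model and the `lllyasviel/control_v11p_sd15_inpaint` ControlNet; the model IDs, file names, and prompt are placeholders, not a prescribed setup.

```python
import torch
import numpy as np
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

# Assumption: SD1.5 base + the SD1.5 inpaint ControlNet (example model IDs).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("room.png")   # source photo of the room (placeholder path)
mask = load_image("mask.png")    # white = region to redraw (placeholder path)

def make_inpaint_condition(image, mask):
    # The inpaint ControlNet expects the conditioning image with
    # masked pixels set to -1.
    img = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    m = np.array(mask.convert("L")).astype(np.float32) / 255.0
    img[m > 0.5] = -1.0
    img = np.expand_dims(img, 0).transpose(0, 3, 1, 2)
    return torch.from_numpy(img)

control_image = make_inpaint_condition(image, mask)

result = pipe(
    prompt="a scandinavian style sofa, bright living room",
    image=image,
    mask_image=mask,
    control_image=control_image,
    num_inference_steps=30,
).images[0]
result.save("out.png")
```

The same pattern applies to SDXL by swapping in an SDXL base model and a matching SDXL ControlNet.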

Additionally, if needed, consider using vision models like YOLO to generate the inpainting masks automatically. This is the same technique used in tools like ADetailer, which detects parts such as faces and fingers during image generation and redraws them cleanly. A sketch of this step follows below.
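
A minimal sketch of automatic mask generation with Ultralytics YOLO segmentation, assuming a COCO-pretrained checkpoint (`yolov8n-seg.pt`); the target class, file names, and checkpoint are illustrative choices, not requirements.

```python
import numpy as np
from PIL import Image
from ultralytics import YOLO

# Assumption: a segmentation-capable checkpoint; yolov8n-seg is an example.
model = YOLO("yolov8n-seg.pt")
results = model("room.png")  # placeholder input path

# Build a binary mask covering every detected instance of the target class,
# e.g. "couch" in COCO, so the sofa region can be inpainted.
target = "couch"
mask_accum = None
for r in results:
    if r.masks is None:
        continue
    for seg, cls in zip(r.masks.data, r.boxes.cls):
        if r.names[int(cls)] != target:
            continue
        m = seg.cpu().numpy()
        mask_accum = m if mask_accum is None else np.maximum(mask_accum, m)

if mask_accum is not None:
    # Resize to the original image size and save as a white-on-black mask
    # that can be passed to the inpainting pipeline above.
    mask_img = Image.fromarray((mask_accum * 255).astype(np.uint8))
    mask_img = mask_img.resize(Image.open("room.png").size)
    mask_img.save("mask.png")
```

For furniture categories not covered by COCO, a custom-trained segmentation model or an open-vocabulary detector would be needed instead.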