AI Interior Design Web App

Hey guys,

I want to create an amazing AI web app for interior design. I will need image-to-image, text-to-image, and inpainting features. I have my own datasets that I will use for training on Replicate. What’s the best workflow for an app like this? My son is a Python coder, so he will take care of the ‘engine’. What do we need to know? Any information or advice is helpful!

First, decide whether to use a dedicated model such as VTON or a general-purpose diffusion model for inpainting. If you go with a general-purpose model, note that the built-in inpainting pipeline in Diffusers seems to assume a model trained specifically for inpainting, which can be cumbersome to handle. It is often easier to use a regular model combined with ControlNet for inpainting. Both SDXL and SD 1.5 have multiple ControlNet options, so choose the one you find most suitable. FLUX offers better image quality and prompt understanding, but note that it requires significant resources for fine-tuning.
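
To make that concrete, here is a minimal sketch of the ControlNet inpainting route with SD 1.5 in Diffusers. The model IDs are public checkpoints; the file names are placeholders, and you would tune the prompt and parameters for interiors:

```python
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

def make_inpaint_condition(image, mask):
    # Standard Diffusers recipe: mark masked pixels with -1.0 so the
    # inpaint ControlNet knows which region to redraw.
    image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    mask = np.array(mask.convert("L")).astype(np.float32) / 255.0
    image[mask > 0.5] = -1.0
    image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
    return torch.from_numpy(image)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("room.png")  # source photo, placeholder path
mask = load_image("mask.png")   # white = region to redraw, placeholder path

result = pipe(
    prompt="a green velvet sofa in a scandinavian living room",
    image=image,
    mask_image=mask,
    control_image=make_inpaint_condition(image, mask),
).images[0]
result.save("room_inpainted.png")
```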

Additionally, if needed, consider using vision models like YOLO to automatically generate masks for inpainting. This is a well-established technique used in tools like ADetailer, which detects and cleanly redraws details such as fingers during image generation.
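
A rough sketch of that auto-masking idea, assuming the `ultralytics` YOLO package; `yolov8n.pt` is a generic pretrained detector, so for furniture you would likely want a detector (or segmentation model) trained on interior classes. It simply turns detected boxes into a white-on-black mask:

```python
from PIL import Image, ImageDraw
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # generic pretrained detector
image = Image.open("room.png").convert("RGB")  # placeholder path

results = model(image)
mask = Image.new("L", image.size, 0)           # black = keep
draw = ImageDraw.Draw(mask)
for box in results[0].boxes.xyxy.tolist():     # each box is [x1, y1, x2, y2]
    draw.rectangle(box, fill=255)              # white = redraw
mask.save("mask.png")                          # feed this to the inpaint pipeline
```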

By the way, Hugging Face’s Expert Support can recommend models and optimization techniques, so it might be worth trying if the conditions are right.

I’m not sure exactly what conditions are required to use the support😅, but according to people who have actually used it, they seem to provide quite a lot of helpful information.

Thank you so so much John! I’m going to go through your advice 🙂 thank youuuuuuu

John, sorry, but can I ask: AI told me to do a ‘proof of concept’ on Replicate, which I have carried out (though I cannot decide between 2 models). Would you say this is a good start? And then my son will do the Python part?

Yeah. Once you decide on the overall model architecture (SD 1.5, SDXL, FLUX, VTON, etc.), the code content is pretty much set, so I think it’s okay to hand it over to a Python coder at that point.
Fine-tuning the model itself can be done later or in parallel. Either way, it’s just a matter of swapping it out.

If the results aren’t satisfactory at the stage before fine-tuning, fine-tuning may not be very promising, so it might be cheaper to create a code prototype first.
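
For example, if the fine-tune ends up as a LoRA (Replicate’s trainers often produce those), dropping it into the prototype later is one extra call. A sketch, assuming the SDXL route; the LoRA repo name is hypothetical:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# The prototype only needs the base model to validate the code path.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Later, once the fine-tune exists, swapping it in is one line.
# "your-username/interiors-lora" is a hypothetical checkpoint name.
pipe.load_lora_weights("your-username/interiors-lora")

image = pipe("a minimalist bedroom with oak furniture").images[0]
image.save("preview.png")
```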

Thank you John!

Hey! That sounds like an exciting project with lots of potential. Since your son will handle the Python side, I’d suggest focusing early on defining clear API endpoints for each feature (image-to-image, text-to-image, and inpainting) so the frontend can interact smoothly with the backend; see the sketch below. Also, consider using cloud services that support scalable GPU instances for training and inference, especially with Replicate. For datasets, make sure they’re well-curated and balanced to get the best results. Lastly, think about user experience: quick feedback loops and an easy-to-use interface will make a huge difference. Happy to dive deeper if you have more specific questions!
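
For example, the endpoint layout could start as small as this FastAPI sketch (FastAPI is just one option, and the `run_*` helpers are hypothetical stand-ins for whichever pipeline wrapper your son writes):

```python
import io

from fastapi import FastAPI, UploadFile
from fastapi.responses import StreamingResponse
from PIL import Image

app = FastAPI()

def png_response(image: Image.Image) -> StreamingResponse:
    # Serialize a PIL image into a PNG HTTP response.
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    buf.seek(0)
    return StreamingResponse(buf, media_type="image/png")

@app.post("/text-to-image")
async def text_to_image(prompt: str):
    return png_response(run_text2img(prompt))           # hypothetical helper

@app.post("/image-to-image")
async def image_to_image(prompt: str, image: UploadFile):
    src = Image.open(io.BytesIO(await image.read()))
    return png_response(run_img2img(prompt, src))       # hypothetical helper

@app.post("/inpaint")
async def inpaint(prompt: str, image: UploadFile, mask: UploadFile):
    src = Image.open(io.BytesIO(await image.read()))
    msk = Image.open(io.BytesIO(await mask.read()))
    return png_response(run_inpaint(prompt, src, msk))  # hypothetical helper
```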
