I am building an interior design app and fine-tuning Stable Diffusion with a ControlNet depth condition. I am stuck on whether I should fine-tune only the ControlNet for the depth condition, or also fine-tune Stable Diffusion itself for image generation on my custom dataset.
I didn’t use the existing depth ControlNet model as-is because I want to train it on my own dataset to preserve layout. The first link I tried is all about fine-tuning the pretrained depth model. I picked up some points from the second blog, but I got stuck loading the models and configuration files, e.g., which files I would have to modify and so on.
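The usual ControlNet recipe is to keep the Stable Diffusion UNet frozen and train only the ControlNet branch, so the base model's generative prior is preserved while the depth condition learns your layouts. Below is a minimal sketch of that freeze-the-backbone pattern using plain PyTorch stand-in modules (the `backbone`/`adapter` modules are hypothetical placeholders; in practice they would be diffusers' `UNet2DConditionModel` and a `ControlNetModel` initialized from it):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: "backbone" plays the role of the frozen SD UNet,
# "adapter" plays the role of the trainable ControlNet branch.
backbone = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
adapter = nn.Sequential(nn.Linear(8, 8))

# Freeze the backbone: only the adapter's weights receive gradients.
for p in backbone.parameters():
    p.requires_grad_(False)

# The optimizer only ever sees the adapter's parameters.
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-5)

x = torch.randn(4, 8)
loss = (backbone(x) + adapter(x)).pow(2).mean()
loss.backward()

# Backbone gradients stay None; adapter gradients are populated.
assert all(p.grad is None for p in backbone.parameters())
assert all(p.grad is not None for p in adapter.parameters())
```

Fine-tuning the base UNet as well is only worth considering if your interior-design imagery is far outside what Stable Diffusion already generates; otherwise training both tends to need much more data and risks degrading the base model.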
For technical questions about the diffusion model, please consult the Diffusers development team on the HF Discord server, as they will be able to provide you with the most accurate information.
Recently, a vulnerability was reported in loading pickled .pth checkpoint files, which may cause issues with loading and saving models. As a temporary workaround, you may be able to downgrade Transformers (e.g., to version 4.48.3 or earlier). Upgrading PyTorch to version 2.6 or later is the recommended fix.
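The underlying issue is that `torch.load` unpickles arbitrary Python objects by default in older PyTorch versions. A small sketch of the safer loading path, which restricts deserialization to plain tensors and containers (PyTorch 2.6 makes this the default):

```python
import os
import tempfile
import torch

# Save a plain state dict to a temporary .pth file.
state = {"w": torch.ones(2, 2)}
path = os.path.join(tempfile.mkdtemp(), "model.pth")
torch.save(state, path)

# weights_only=True refuses to unpickle arbitrary Python objects,
# mitigating the pickle-based vulnerability in .pth checkpoints.
loaded = torch.load(path, weights_only=True)
print(loaded["w"].sum().item())  # 4.0
```

Where possible, prefer checkpoints in the safetensors format, which avoids pickle entirely.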