I’ve found this amazing blog where GAN-based models were used to transfer the style and pose of an image A to an image B.
- image A: real photo
- image B: ukiyo-e-style art
I’m trying to achieve this type of transformation with recent diffusion models. So far, DreamBooth combined with ControlNet (straightforward to set up) gives really promising results in my initial tests. I’m using the Hugging Face diffusers library for all experiments. (TODO: AUTOMATIC1111, sd-webui-controlnet.)
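A minimal sketch of the DreamBooth + ControlNet inference path in diffusers. The `./dreambooth-ukiyoe` checkpoint path, the `face.jpg` input, and the `sks` identifier token are placeholders from my setup; the `lllyasviel/sd-controlnet-canny` weights are the public ones on the Hub.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)

# Build a Canny edge map from the real photo (image A); this is the
# conditioning input that ControlNet follows.
photo = np.array(Image.open("face.jpg"))       # placeholder input photo
edges = cv2.Canny(photo, 100, 200)
edges = np.stack([edges] * 3, axis=-1)         # ControlNet expects 3 channels
canny_image = Image.fromarray(edges)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "./dreambooth-ukiyoe",                     # DreamBooth output dir (placeholder)
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# "sks" is the identifier token bound to the style during DreamBooth
# fine-tuning (placeholder; use whatever token you trained with).
result = pipe(
    "a portrait of a man in sks ukiyo-e style",
    image=canny_image,
    num_inference_steps=30,
).images[0]
result.save("ukiyoe_portrait.png")
```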
I’m looking for advice from anyone who might add something here. I would really appreciate your feedback. Here are a few more examples from the above-mentioned blog to motivate you.
Misc
- With ControlNet, the choice of conditioning input is critical here. The actual human face (above pic) won’t deform the way the art style would; see the sketch after this list.
- I found this artist list that was used to train SD models. Ukiyo-e artists like Utamaro and Kunisada are present there, and the model recognizes them through the pretrained weights. What about ukiyo-e artists who were not present in the dataset?
- This might interest you as well.
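On the conditioning-input point above: a hard Canny map of a real face pins the output to the photo’s exact facial geometry, so the model has little room to exaggerate features toward the ukiyo-e look. Softer preprocessors loosen that constraint. A minimal sketch, assuming the `controlnet_aux` helper package (the `face.jpg` path is a placeholder); the HED map pairs with `lllyasviel/sd-controlnet-hed` and the pose skeleton with `lllyasviel/sd-controlnet-openpose`.

```python
from PIL import Image
from controlnet_aux import HEDdetector, OpenposeDetector

photo = Image.open("face.jpg")  # placeholder input photo

# Soft edge map: keeps the overall composition but blurs exact contours,
# leaving room for the style to deform the face.
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
soft_edges = hed(photo)
soft_edges.save("cond_hed.png")

# Pose skeleton: keeps only head/body pose, no facial geometry at all.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = openpose(photo)
pose.save("cond_pose.png")
```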