I’m interested in “standard” text-conditioned generation and inpainting (and/or outpainting). Is it possible to do standard generation with a model fine-tuned for inpainting? I kind of imagine that it is, since inpainting is still generation, but it does have a bit of contextual “help” in doing its job… so maybe not?
If the answer is “no”, would it help to randomize the (boolean) `mask_full_image` argument to the `random_mask()` function when fine-tuning with `train_dreambooth_inpaint.py`?
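Something along these lines is what I have in mind — a rough sketch only, assuming `random_mask()` takes a `mask_full_image` flag as in the script; `full_mask_prob` and `sample_mask_full_image` are hypothetical names I made up, not existing arguments:

```python
import random

# Hypothetical helper: decide per training example whether to mask the whole
# image, so some fraction of fine-tuning steps effectively look like plain
# text-to-image generation. full_mask_prob is an assumed knob, not a flag
# that train_dreambooth_inpaint.py currently exposes.
def sample_mask_full_image(full_mask_prob: float = 0.25) -> bool:
    return random.random() < full_mask_prob

# Inside the training loop the call might then become something like:
#   mask = random_mask(image.size, ratio=1, mask_full_image=sample_mask_full_image())
```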
Presumably, “inpainting” with the whole image masked would be equivalent to standard generation…
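If that presumption holds, one could test it directly by feeding the inpainting pipeline a blank init image and an all-white mask. A minimal sketch, assuming a diffusers `StableDiffusionInpaintPipeline`; the checkpoint name and prompt are just placeholders for your own fine-tuned model:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Drive an inpainting checkpoint like a plain text-to-image model:
# blank init image + all-white mask ("regenerate everything").
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # example checkpoint; swap in your fine-tuned model
    torch_dtype=torch.float16,
).to("cuda")

size = (512, 512)
blank_init = Image.new("RGB", size, (127, 127, 127))  # contents shouldn't matter when fully masked
full_mask = Image.new("L", size, 255)                 # white everywhere = inpaint the whole image

result = pipe(
    prompt="a photo of a sks dog in a field",  # placeholder prompt
    image=blank_init,
    mask_image=full_mask,
).images[0]
result.save("full_mask_generation.png")
```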