Using an inpainting model for standard generation

I’m interested in both “standard” text-conditioned generation and inpainting (and/or outpainting). Is it possible to do standard generation with a model fine-tuned for inpainting? I’d imagine it is, since inpainting is still generation, but the model does get some contextual “help” from the unmasked region… so maybe not?

If the answer is “no”, would it help to randomize the (boolean) mask_full_image argument passed to random_mask() when fine-tuning with train_dreambooth_inpaint.py?
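
Concretely, I’m imagining something like this at the point where the script builds its per-step mask (just a sketch: FULL_MASK_PROB is a knob I’d be adding, not something in the script, and I’m going from memory on random_mask()’s signature):

```python
import random

# hypothetical knob: fraction of training steps that mask the whole image
FULL_MASK_PROB = 0.25

# in train_dreambooth_inpaint.py, random_mask(im_shape, ratio, mask_full_image)
# builds the per-step mask; flipping mask_full_image on at random would mix in
# steps where the model sees no image context at all
mask = random_mask(pil_image.size, 1, random.random() < FULL_MASK_PROB)
```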

Presumably, “inpainting” with the whole image masked would be equivalent to standard generation…
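
That is, at inference time I’d expect something like this to behave like plain text-to-image (a sketch; runwayml/stable-diffusion-inpainting stands in for whatever inpainting checkpoint you’ve fine-tuned):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# blank init image plus an all-white mask: every pixel is "to be inpainted",
# so the image-context inputs should carry no usable information
init_image = Image.new("RGB", (512, 512), (0, 0, 0))
full_mask = Image.new("L", (512, 512), 255)

image = pipe(
    prompt="a photo of an astronaut riding a horse",
    image=init_image,
    mask_image=full_mask,
).images[0]
```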

Just to note, I don’t actually have classes for my data, so Dreambooth might not make much sense. Is there much more to inpainting training than integrating the random masks? I mean, could I hack that into the normal train_text_to_image.py script and get basically the same effect?
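
From what I can tell, the other structural difference (beyond the masks) is that the inpainting UNet takes 9 input channels instead of 4: the noisy latents are concatenated with the downsampled mask and the masked-image latents. So porting this to train_text_to_image.py would mean at least changing the UNet input, something like the toy sketch below (dummy tensors, not the actual script code):

```python
import torch

# shapes as in the SD inpainting UNet: 4 latent channels + 1 mask channel
# + 4 masked-image-latent channels -> 9-channel conv_in
batch, h, w = 2, 64, 64
noisy_latents = torch.randn(batch, 4, h, w)         # noised VAE latents of the target image
mask = torch.rand(batch, 1, 512, 512).round()       # binary mask at pixel resolution
masked_image_latents = torch.randn(batch, 4, h, w)  # VAE latents of image * (1 - mask)

# the mask has to be downsampled to latent resolution before concatenation
mask_latent = torch.nn.functional.interpolate(mask, size=(h, w))

unet_input = torch.cat([noisy_latents, mask_latent, masked_image_latents], dim=1)
print(unet_input.shape)  # torch.Size([2, 9, 64, 64]) -- fed to the UNet in place of noisy_latents
```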