Okay, I came up with a solution, so I’ll share it in case someone needs it in the future.
Right after the first fine-tuning, you need to run this (with torch and StableDiffusionPipeline from diffusers imported):
# Reload the fine-tuned model and re-save it in fp16 so it can serve as a pretrained base
StableDiffusionPipeline.from_pretrained(OUTPUT_DIR, torch_dtype=torch.float16).save_pretrained(OUTPUT_DIR)
Then you can use DreamBooth again with pretrained_model_name_or_path=$OUTPUT_DIR.
However, I noticed something that other people have already pointed out about using DreamBooth more than once: each time you run it, the whole model is adjusted to generate the current instance, so it not only loses generalization but also gets worse at producing the instances added in previous DreamBooth runs. I've read that an alternative is to combine Textual Inversion with DreamBooth, but I haven't tested it yet.
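If you want to check that degradation yourself, a quick way is to sample both instance prompts from the latest checkpoint and compare. This is just a rough sketch; the output path and the sks1/sks2 instance tokens are placeholders, adapt them to your own runs:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the model produced by the *second* DreamBooth run (placeholder path)
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/second-dreambooth-output",
    torch_dtype=torch.float16,
).to("cuda")

# Sample both instance prompts; in my experience the one from the first run comes out worse
for prompt in ["a photo of sks1 person", "a photo of sks2 person"]:  # placeholder tokens
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(prompt.replace(" ", "_") + ".png")
```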