Using negative prompts with a batch size > 1 (passing `[prompt]*x` and `[negative]*x` respectively) causes the output image to follow the negative prompt more strongly than the positive prompt.
I can link the notebook I'm using if necessary, but the code here is simple enough that I don't think it's needed.
(seed 23335, width/height = 512, prompt = "frog", negative = "bird", x = 2, guidance_scale = 7.5, steps = 150, scheduler = `LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")`)
edit: this is fixed by passing both prompts as single strings (not as lists multiplied by x) and passing `num_images_per_prompt=x` in the call. Leaving this here in case other people hit the same problem (same settings, just using `num_images_per_prompt=2` instead of making the prompt a list and multiplying by 2: https://media.discordapp.net/attachments/1024588665596411944/1028162978689847396/unknown.png )
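For anyone copying the fix, here's a minimal sketch of the call-shape difference. The argument names (`num_images_per_prompt`, `guidance_scale`, `num_inference_steps`) match diffusers' `StableDiffusionPipeline.__call__`; the actual pipeline call is elided since it needs model weights, so the kwargs are just shown as dicts:

```python
x = 2

# Broken shape: the caller pre-expands both prompts into lists of
# length x, which triggers the mis-repeat described below
broken_kwargs = dict(
    prompt=["frog"] * x,
    negative_prompt=["bird"] * x,
)

# Fixed shape: single strings plus num_images_per_prompt, letting the
# pipeline handle batch expansion itself
fixed_kwargs = dict(
    prompt="frog",
    negative_prompt="bird",
    num_images_per_prompt=x,
    guidance_scale=7.5,
    num_inference_steps=150,
)
```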
I think this line might have something to do with it?
The uncond embeddings will already be a list (the negative prompts), so repeating them again duplicates them. Then, in the line after, where the negatives are concatenated with the positives, the model ends up seeing only the negatives (for both the uncond and cond inputs). Could be wrong, but from a quick glance that's what I think is happening.
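To make the suspected misalignment concrete, here's a toy sketch (not the actual diffusers source; strings stand in for embedding rows). If the already-expanded negative list gets repeated again, the classifier-free-guidance batch pairs the duplicated latents only with negative rows:

```python
x = 2
cond = ["frog_emb"] * x      # caller passed [prompt] * x
uncond = ["bird_emb"] * x    # caller passed [negative] * x, already length x

# Suspected bug: the pipeline repeats the uncond embeddings for the
# batch even though the caller already expanded them
uncond_repeated = uncond * x              # 4 rows instead of 2
combined = uncond_repeated + cond         # [neg, neg, neg, neg, pos, pos]

# The duplicated latent batch has 2 * x = 4 entries, so the UNet only
# pairs latents with the first 4 embedding rows -- all negatives
used = combined[: 2 * x]
uncond_half, cond_half = used[:x], used[x:]
# both halves are negative-prompt rows, so guidance pushes the image
# toward the negative prompt
```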
edit: oops, you already said something like this in the issue, cheers!