Upscaling an Anime Image using Diffusers

Hi community, I’m probably doing something really wrong, but I’m trying to create an anime image using dreamlike-art/dreamlike-anime-1.0. I generate a low-res image first to check whether it looks OK, and then I need to upscale it.

from diffusers import StableDiffusionPipeline
import torch

generator = torch.manual_seed(100)

def get_image(prompt, model_id="dreamlike-art/dreamlike-anime-1.0"):
  pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
  pipe = pipe.to("cuda")
  negative_prompt = 'simple background, duplicate, retro style, low quality, lowest quality, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013, bad anatomy, bad proportions, extra digits, lowres, username, artist name, error, duplicate, watermark, signature, text, extra digit, fewer digits, worst quality, jpeg artifacts, blurry'

  # Keep the output as latents so they can be fed to the latent upscaler later
  low_res_latents = pipe(
      prompt,
      height=512,
      width=768,
      guidance_scale=7.5,
      negative_prompt=negative_prompt,
      generator=generator,
      output_type="latent",
  ).images

  # Decode the latents to a PIL image just to preview the low-res result
  with torch.no_grad():
    image = pipe.decode_latents(low_res_latents)
  image = pipe.numpy_to_pil(image)[0]
  image.save("a1.png")

  return low_res_latents

Then I try to upscale it and get really weird results, especially in the eyes:

from diffusers import StableDiffusionLatentUpscalePipeline

def upscale(prompt, low_res_latents, name):
  model_id = "stabilityai/sd-x2-latent-upscaler"
  upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
  upscaler.to("cuda")
  # Pass the latents (not a decoded image) to the latent upscaler
  upscaled_image = upscaler(
      prompt=prompt,
      image=low_res_latents,
      num_inference_steps=20,
      guidance_scale=0,
      generator=generator,
  ).images[0]
  upscaled_image.save(name)
  return upscaled_image

It also pixelates the new image too much. Am I doing something wrong?

Here are examples of the output:
Low Res:

Upscaled x2:

Here is a good alternative: Improving Diffusers Package for High-Quality Image Generation | by Andrew Zhu | Towards Data Science

Hi, I’m hitting the same problem using img2img with ADetailer. Did you find a way to solve this?

generated image:

Hi, I’ve found this workaround: Improving Diffusers Package for High-Quality Image Generation | by Andrew Zhu | Towards Data Science, with the strength parameter at 0.3.
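
For anyone else reading this, a minimal sketch of that img2img workaround (assumptions: the low-res image was saved as a1.png by the code above, the prompt is a placeholder, and the 2x resize factor and step count are just examples):

from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image
import torch

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "dreamlike-art/dreamlike-anime-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "anime girl, detailed face, high quality"  # placeholder prompt
low_res = Image.open("a1.png")
# Plain 2x resize first, then let img2img re-add detail at low strength
big = low_res.resize((low_res.width * 2, low_res.height * 2), Image.LANCZOS)

refined = pipe(
    prompt=prompt,
    image=big,
    strength=0.3,          # low strength keeps the composition and only refines detail
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
refined.save("a1_img2img_x2.png")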

I also found the reason, thanks for your attention. I forgot to set use_karras_sigmas=True when using DPM++ 2M Karras in diffusers.

A1111 <> Diffusers Scheduler mapping · Issue #4167 · huggingface/diffusers (github.com)
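
For reference, a sketch of what that looks like in diffusers (assuming pipe is the StableDiffusionPipeline from above; DPMSolverMultistepScheduler with use_karras_sigmas=True corresponds to DPM++ 2M Karras per the mapping in that issue):

from diffusers import DPMSolverMultistepScheduler

# Rebuild the scheduler from the existing config with Karras sigmas enabled
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)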


I would try to upscale using img2img with a higher resolution.

You can also use the Clarity upscaler (Clarity AI | #1 AI Image Upscaler & Enhancer) for free in Automatic1111 (sorry, not diffusers): x.com