How to use negative embeddings in Python with Stable Diffusion?

Hello everyone!

Could someone please help me with how to use negative embeddings in Python when generating images with the Stable Diffusion XL model?

This is my code so far:

from diffusers import DiffusionPipeline
import torch

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True
).to("cuda")

prompt = "a parent leaning down to their child, holding their hand and nodding understandingly as the child expresses their worries and fears"

image = base(
    prompt,
    negative_prompt=...,  # <- what should go here to use the embedding?
).images[0]

image

I downloaded a negative embedding file for bad hands from CivitAI ("bad-hands-5.pt"), but I don't know how to pass it to the negative_prompt as suggested.
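The closest thing I could find is the load_textual_inversion method in diffusers, so my best guess so far is the sketch below. The token name "bad-hands-5" is just my assumption, and I'm not sure this is the right way to do it for an SDXL pipeline:

# My guess: load the .pt file as a textual inversion embedding under a token,
# then reference that token in the negative prompt.
base.load_textual_inversion("bad-hands-5.pt", token="bad-hands-5")

image = base(
    prompt,
    negative_prompt="bad-hands-5",
).images[0]

Is this roughly the intended usage, or does the embedding need to be passed some other way?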

Also, if anyone has tips for generating humans without anomalies (like extra or fused fingers), please share! I have tried different negative prompts, but I still see artifacts and other things I don't want in the generated images.

Thank you!