How to convert a .safetensors SDXL checkpoint to ONNX?

Hello,

I’m attempting to convert an SDXL .safetensors checkpoint to an ONNX model. How can I do that?

I’ve already tried the Optimum CLI and other methods using Optimum and the SDXL pipelines. All of them fail to find the model.safetensors file path one way or another.


The Optimum CLI converter is designed to handle Diffusers directory structures rather than standalone safetensors files (the single-file format used by ComfyUI, A1111 WebUI, etc.). Therefore, if you want to convert a standalone safetensors file to ONNX, it’s safer to convert it to the Diffusers format first.


Do it in two steps: first turn the single SDXL .safetensors into a Diffusers folder, then export that folder to ONNX. Optimum fails with “model.safetensors not found” because it expects the Diffusers directory layout (unet/, vae/, text_encoder/, text_encoder_2/, tokenizers, scheduler) rather than a single checkpoint file. (Hugging Face)

Background you need

  • Single-file SDXL (A1111/ComfyUI style) bundles UNet, VAE, and text encoders into one .safetensors.
  • Diffusers layout splits these into subfolders and a model_index.json. Optimum’s ONNX exporter and ORT pipelines operate on this layout. See the SDXL base repo’s Files tab for the exact folder names. (Hugging Face)
  • from_single_file is the supported loader to convert a single checkpoint into a Diffusers pipeline object you can save to disk. (Hugging Face)
  • ONNX export is provided by Optimum. Use optimum-cli export onnx or load an ORT pipeline with export=True. (Hugging Face)

Step-by-step: single .safetensors → Diffusers folder → ONNX

0) Environment

# Python 3.10+ recommended
python -m venv venv && source venv/bin/activate

pip install -U "diffusers>=0.28" transformers accelerate safetensors
pip install -U "optimum[onnx]" onnx onnxruntime  # GPU? add onnxruntime-gpu
# docs:
# - diffusers single-file loader: https://huggingface.co/docs/diffusers/en/api/loaders/single_file
# - optimum ONNX export:       https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model

diffusers>=0.28 has the current from_single_file behavior and SDXL fixes. Optimum provides the ONNX exporter and ORT pipelines. (Hugging Face)
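
To confirm what is actually installed, a quick check with the standard library (the names here are the pip distribution names):

# Print installed versions of the relevant packages (stdlib only).
from importlib.metadata import PackageNotFoundError, version

for pkg in ("diffusers", "transformers", "optimum", "onnx", "onnxruntime"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")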

1) Convert the single checkpoint to a Diffusers folder

Python, minimal and robust

# https://huggingface.co/docs/diffusers/en/api/loaders/single_file
import torch
from diffusers import StableDiffusionXLPipeline

ckpt_path = "your_sdxl_model.safetensors"  # local .safetensors
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    torch_dtype=torch.float16,  # drop this (or use torch.float32) if the file is full precision
)
pipe.save_pretrained("sdxl_diffusers")  # writes unet/, vae/, text_encoder/, text_encoder_2/, tokenizer*/..., model_index.json

This writes the exact layout exporters need. If you work offline, from_single_file may still try to fetch configs. Either run once online or pass a reference repo via config="stabilityai/stable-diffusion-xl-base-1.0" so it uses known configs locally. (Hugging Face)
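
A minimal sketch of that offline fallback, assuming a recent diffusers that accepts config= on from_single_file and that the reference configs are already in your local cache:

# Offline-friendly variant: point from_single_file at a reference repo for configs.
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "your_sdxl_model.safetensors",
    config="stabilityai/stable-diffusion-xl-base-1.0",  # reference configs
    local_files_only=True,  # fail fast instead of attempting downloads
)
pipe.save_pretrained("sdxl_diffusers")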

Alternative: official converter script

# https://raw.githubusercontent.com/huggingface/diffusers/main/scripts/convert_original_stable_diffusion_to_diffusers.py
python convert_original_stable_diffusion_to_diffusers.py \
  --checkpoint_path your_sdxl_model.safetensors \
  --dump_path sdxl_diffusers \
  --pipeline_class_name StableDiffusionXLPipeline \
  --from_safetensors --to_safetensors --extract_ema

Use the SDXL pipeline class. If you hit script-specific bugs, pin a compatible diffusers version or prefer the from_single_file path above. (GitHub)

Sanity-check the result
Your folder should look like the SDXL base model’s repo: subfolders unet/, vae/, text_encoder/, text_encoder_2/, tokenizer/, tokenizer_2/, scheduler/, plus model_index.json. That’s what Optimum resolves internally. (Hugging Face)
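
A quick structural check with the standard library (sdxl_diffusers is the output folder from step 1):

# Verify the converted folder has the files the exporter expects.
from pathlib import Path

root = Path("sdxl_diffusers")
expected = ["model_index.json", "unet", "vae", "text_encoder",
            "text_encoder_2", "tokenizer", "tokenizer_2", "scheduler"]
missing = [name for name in expected if not (root / name).exists()]
print("missing:", missing if missing else "none")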

2) Export the Diffusers folder to ONNX

CLI export (recommended)

# https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model
optimum-cli export onnx \
  -m ./sdxl_diffusers \
  --task stable-diffusion-xl \
  sdxl_onnx/

This emits ONNX files for UNet, VAE encoder/decoder, and both text encoders, alongside configs that the ORT pipelines read. (Hugging Face)

Python export-on-load (same result, avoids CLI)

# https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0 (model card shows export=True usage)
from optimum.onnxruntime import ORTStableDiffusionXLPipeline
pipe = ORTStableDiffusionXLPipeline.from_pretrained("sdxl_diffusers", export=True)
pipe.save_pretrained("sdxl_onnx")

This converts and saves an ONNX-ready folder in one call. (Hugging Face)

3) Use the ONNX model

# https://huggingface.co/docs/optimum-onnx/onnxruntime/package_reference/modeling_diffusion
from optimum.onnxruntime import ORTStableDiffusionXLPipeline
pipe = ORTStableDiffusionXLPipeline.from_pretrained("sdxl_onnx")
img = pipe("a photo of a red fox", height=1024, width=1024, num_inference_steps=30).images[0]
img.save("out.png")

For img2img or inpaint, use the corresponding ORT pipelines. (Hugging Face)
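
For instance, a minimal img2img sketch against the same exported folder (out.png is the image generated above; strength=0.5 is an arbitrary choice):

# Sketch: img2img with the same exported ONNX folder.
from optimum.onnxruntime import ORTStableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained("sdxl_onnx")
init = Image.open("out.png").convert("RGB")
img = pipe("a photo of a red fox, watercolor", image=init, strength=0.5).images[0]
img.save("out_i2i.png")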


Why your attempts failed

  • You likely passed the .safetensors file to optimum-cli export onnx. The exporter walks a Diffusers directory and expects files like text_encoder/model.safetensors. With a single checkpoint, those files do not exist yet, so it errors with “model.safetensors not found”. Convert first. (Hugging Face)
  • Some environments run from_single_file offline. The loader then complains about missing local configs. Add config="stabilityai/stable-diffusion-xl-base-1.0" or run once online to populate configs. (GitHub)
  • Older converter scripts had SDXL-specific breakages (text_encoder_2 arg, etc.). Use current diffusers or the from_single_file path. (GitHub)

Variants and options

  • Refiner: Convert and export the refiner checkpoint separately if you use the two-stage SDXL setup. Load it with ORTStableDiffusionXLImg2ImgPipeline for refinement (see the sketch after this list). (Hugging Face)
  • Inpaint / img2img: If your single checkpoint is for inpainting or img2img, pass the correct pipeline class to the script or load the right ORT pipeline after export. (Hugging Face)
  • Precision: If the original file is fp16, set torch_dtype=torch.float16 when loading it, then export. ONNX Runtime also supports mixed precision and post-export optimization if you want smaller graphs. (Hugging Face)
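
A sketch of the two-stage flow, assuming you converted and exported the refiner checkpoint to a hypothetical sdxl_refiner_onnx/ folder with the same two steps (use StableDiffusionXLImg2ImgPipeline.from_single_file for the refiner):

# Sketch: two-stage base + refiner, both exported to ONNX.
from optimum.onnxruntime import (
    ORTStableDiffusionXLImg2ImgPipeline,
    ORTStableDiffusionXLPipeline,
)

base = ORTStableDiffusionXLPipeline.from_pretrained("sdxl_onnx")
refiner = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained("sdxl_refiner_onnx")

prompt = "a photo of a red fox"
image = base(prompt, num_inference_steps=30).images[0]
image = refiner(prompt, image=image, strength=0.3).images[0]  # strength is an arbitrary choice
image.save("refined.png")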

Quick end-to-end you can paste

Convert

# https://huggingface.co/docs/diffusers/en/api/loaders/single_file
from diffusers import StableDiffusionXLPipeline
pipe = StableDiffusionXLPipeline.from_single_file("your_sdxl.safetensors")
pipe.save_pretrained("sdxl_diffusers")

Export

# https://huggingface.co/docs/optimum-onnx/onnx/usage_guides/export_a_model
optimum-cli export onnx -m sdxl_diffusers --task stable-diffusion-xl sdxl_onnx/

Infer

# https://huggingface.co/docs/optimum-onnx/onnxruntime/package_reference/modeling_diffusion
from optimum.onnxruntime import ORTStableDiffusionXLPipeline
pipe = ORTStableDiffusionXLPipeline.from_pretrained("sdxl_onnx")
pipe("an astronaut riding a green horse, 1024x1024").images[0].save("astronaut.png")

(Hugging Face)


Troubleshooting checklist

  • Exporter path: Pass the Diffusers folder to -m, not the single .safetensors. The folder must contain model_index.json, unet/, vae/, text_encoder/, text_encoder_2/, tokenizers, and scheduler/. Compare with the SDXL base repo’s structure. (Hugging Face)
  • Offline configs: If from_single_file says it cannot find local configs, supply config="stabilityai/stable-diffusion-xl-base-1.0" or run once online. (GitHub)
  • Script hiccups: If the official converter fails on your environment, prefer from_single_file, which is the primary path maintained in docs. (Hugging Face)
  • Pipelines mismatch: For inpaint/img2img, use the corresponding pipeline class during conversion or load time. The ORT API exposes ORTStableDiffusionXLImg2ImgPipeline and ORTStableDiffusionXLInpaintPipeline. (Hugging Face)

Short, high-signal references

  • from_single_file loader for SD/SDXL single checkpoints. Usage and config hints. (Hugging Face)
  • Optimum ONNX export guide. CLI and Python, current flags. (Hugging Face)
  • ORT SDXL pipelines. Classes and expected ONNX folder layout at load. (Hugging Face)
  • SDXL base repo “Files”. Shows the exact Diffusers layout the exporter expects. Useful to compare your converted folder. (Hugging Face)
  • Known SDXL conversion issues. Context for script breakages and the safer loader path. (GitHub)