Adding a ControlNet to an SDXL LCM pipeline?

New to Mac and the Diffusers library.

Is it possible to connect a ControlNet while still benefiting from the LCM generation speedup?
How would I wire that?

Here is my current code, which runs fine without a ControlNet:

import torch
from diffusers import DiffusionPipeline, LCMScheduler

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lcm_lora_id = "latent-consistency/lcm-lora-sdxl"

# torch_dtype is needed alongside variant="fp16", otherwise the fp16 weights
# are upcast to float32 on load
pipe = DiffusionPipeline.from_pretrained(
    model_id, variant="fp16", torch_dtype=torch.float16
)
pipe.enable_attention_slicing()

pipe.load_lora_weights(lcm_lora_id)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("mps")


prompts = [
    "Translucent plastics reveal prismatic mysteries from mundane toothbrush handles, scattering vibrant colors in a dance of caustic reflections, as light playfully glows and fades",
    "In the gleaming labyrinth of mirrored facets on a wristwatch face, color bleeds into vivid pools, swept by the tidal currents of radiant caustics and shimmering diffusion",
    "Sculpted from materials inside an ordinary pencil eraser, the iridescent cascade gives way to wondrous patterns of scattering light, while fluid shadows paint vibrant illusions"
]

for i, prompt in enumerate(prompts):
    result = pipe(
        prompt=prompt,
        num_inference_steps=4,
        guidance_scale=1,
    )

    image = result.images[0]
    image.save(f"generated_image_{i}.jpg")