Wan LoRAs seem to be overriding the Rapid AIO Mega v13 NSFW model

I am modifying the v3 workflow I usually use, to add NSFW LoRAs when doing I2V projects.

It seems like the final product does not reference my picture at all. I am using the Load LoRA node. Additionally, I have been doing test runs with a length of 5 frames.

What am I doing wrong when connecting the LoRAs? I was able to use the Chinese Zoom Out LoRA just fine with a picture of mine, but I assume that's because it's a LoRA that modifies the camera angle rather than fabricating images?


There seem to be a few possible causes.


Your extra Wan LoRAs are too strong / wrong type on top of a Mega checkpoint that already has LoRAs baked in, so they override the image. Use only compatible LoRAs, wire them on the model path, and keep their strength low, testing on short clips with a fixed seed.


1. Core fix in one line

Use Wan-compatible LoRAs only (Wan 2.1 or low-noise Wan 2.2),
apply them on the model path (often both high + low branches),
and keep strength modest so they “style” the video instead of replacing your I2V image.

Rapid AIO’s own README says:

  • “WAN 2.1 LORA compatibility is generally still good, along with ‘low noise’ WAN 2.2 LORA compatibility (do not use ‘high noise’ LORAs). You might need to adjust LORA strengths (up or down).” (Hugging Face)

Reddit users confirm:

  • Wan2.1 LoRAs work on Wan2.2, but Wan2.2 high-noise LoRAs do not and can wreck I2V. (Reddit)

2. Practical “short checklist” workaround

Think of this as a small recipe you can follow each time.

2.1. Start clean: confirm Mega I2V works without LoRA

  1. Load Rapid AIO Mega v3/v13 workflow.

  2. Do simple I2V:

    • Start frame connected, end frame bypassed (Mega’s documented I2V mode). (Hugging Face)

  3. No external LoRAs, default Rapid AIO settings (CFG=1, 4 steps). (Hugging Face)

  4. Make a very short test (5–10 frames).

  5. Check: does it clearly animate your picture?

    • If not, fix workflow/mode first; LoRA is not the main issue yet.

You want “baseline I2V” solid before adding any LoRA.


2.2. Use the right kind of LoRA

Only use LoRAs that say:

  • Base: Wan 2.1 or
  • Base: Wan 2.2 low-noise

Avoid:

  • Wan 2.2 high-noise LoRAs (Rapid AIO docs + community explicitly warn against them). (Hugging Face)
  • SD 1.5 / SDXL LoRAs (wrong architecture) on a Wan checkpoint.

Reason: Wan 2.2 is a Mixture-of-Experts model with separate high-noise and low-noise experts; high-noise LoRAs heavily affect global composition and easily override your start frame. (GitHub)
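To make the high/low split concrete, here is a toy sketch (the function name and boundary value are illustrative, not Wan's real internals): the two experts divide the denoising schedule by noise level, and the high-noise expert runs the early steps that lay down global composition, which is exactly where a strong LoRA can overwrite your start frame.

```python
# Illustrative only: Wan 2.2's MoE routes each denoising step to one of
# two experts based on the current noise level (sigma). The boundary
# value here is a made-up placeholder, not the real switch point.

def pick_expert(sigma, boundary=0.9):
    """Route a denoising step to the high- or low-noise expert."""
    return "high_noise_expert" if sigma >= boundary else "low_noise_expert"

# Early (high-sigma) steps go to the high-noise expert; the rest go low.
schedule = [1.0, 0.8, 0.5, 0.2, 0.05]
print([pick_expert(s) for s in schedule])
```

The takeaway: a high-noise LoRA modifies the expert that decides *what the scene is*, so on an I2V run it competes directly with your input image.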


2.3. Wire the LoRA in the right place

In short: put the LoRA on the model, not somewhere random.

Safer pattern (ComfyUI):

  • Load Checkpoint (Rapid AIO Mega) → LoRA Loader (model) → ModelSampling → KSampler(s)

If your workflow exposes high/low diffusers:

  • Many users report success by putting the same Wan 2.1 LoRA on both high and low diffusers so it affects both stages. (Reddit)

In some Wan2.2 workflows (especially with Lightning LoRAs):

  • People add extra LoRAs “before the Lightning LoRA” on the model path so they blend correctly, not after all sampling. (Reddit)

If the LoRA is not actually attached to the Wan model path, it will either do nothing or behave unpredictably.
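As a rough mental model of what "on the model path" means (plain NumPy, not ComfyUI's actual internals): a LoRA contributes a low-rank delta to a weight matrix, scaled by strength. This also shows why a wrong-architecture LoRA fails outright: its matrices simply don't match the model's shapes.

```python
import numpy as np

# Minimal LoRA-merge sketch: W_eff = W + strength * (B @ A).
# All names and sizes here are illustrative.

rng = np.random.default_rng(0)
d, rank = 8, 2
W = rng.normal(size=(d, d))      # a base weight matrix in the model path
A = rng.normal(size=(rank, d))   # LoRA down-projection
B = rng.normal(size=(d, rank))   # LoRA up-projection

def apply_lora(W, A, B, strength):
    """Merge a LoRA delta into a weight matrix at the given strength."""
    return W + strength * (B @ A)

# strength=0 leaves the checkpoint untouched; higher strength means a
# bigger deviation from the baked-in Mega weights.
delta_half = np.abs(apply_lora(W, A, B, 0.5) - W).mean()
delta_full = np.abs(apply_lora(W, A, B, 1.0) - W).mean()
assert delta_half < delta_full
```

If the loader never touches `W` (LoRA not on the model path), the delta is never added and the LoRA does nothing, which matches the symptom above.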


2.4. Turn LoRA strength down, then up slowly

Remember: Rapid AIO Mega NSFW merges already have multiple Wan 2.1 + Wan 2.2 LoRAs baked in at low strength. (Hugging Face)

So your external LoRA is “extra spice” on top of an already-spiced model.

Short rule:

  • Start low:

    • strength_model ~ 0.3–0.5
    • strength_clip ~ 0.6–0.8
  • Render a tiny clip (5 frames).

  • If your picture disappears → halve strength.

  • If nothing seems to change → increase a bit and re-test.

Community guides on Wan LoRA usage (and Wan 2.2 LoRA training posts) also stress tuning high- and low-noise strengths separately, not just slamming everything to 1.0.
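The halve/nudge rule above can be written down as a tiny helper (purely illustrative, not part of any actual workflow or node):

```python
def next_strength(current, picture_preserved, lora_visible, step=0.1, cap=1.0):
    """Suggest the next strength_model value to try after a short test clip."""
    if not picture_preserved:
        return current / 2                # image overridden -> halve
    if not lora_visible:
        return min(current + step, cap)   # no visible effect -> nudge up
    return current                        # both look right -> keep it

# Example: the picture disappeared at 0.5, so try 0.25 next.
print(next_strength(0.5, picture_preserved=False, lora_visible=True))  # 0.25
```

Re-render a 5-frame clip after each adjustment and stop as soon as both the input image and the LoRA's effect survive.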


2.5. Keep prompts consistent with your image

For I2V with strong LoRAs:

  • Describe the same scene as your input image.
  • Avoid a prompt that screams “a totally different scene” plus a strong LoRA; that combo invites the model to redraw everything.

A Reddit answer to Wan I2V+LoRA issues puts it bluntly:

  • “You need to type what needs to happen in the scene in the Text Encoder or it’s just gonna go for something generic.” (Reddit)

So:

  • Good: “same two people at the same dinner table, warm restaurant interior, they stay seated and talk quietly, no one else enters frame”
  • Bad: “wild party, lots of people, everyone kissing passionately” plus a heavy NSFW LoRA on top of a calm dinner photograph.

2.6. Short clips for debugging

You’re already doing 5-frame tests; keep that:

  • 5–10 frames, low resolution for tuning LoRA type + strength.

  • Only when it works:

    • Increase length,
    • Increase resolution,
    • Add more LoRAs.

Reddit Wan2.2 users often prototype with short clips and 480–720p, then scale up once the behavior is right. (Reddit)


3. If you want to be extra safe

Two extra guardrails that help a lot:

  1. Use a non-NSFW Rapid AIO base for complex LoRA stacks

    • The NSFW Mega merges are explicitly described as “various spicy Wan 2.1+2.2 LoRAs at low strength… all in one solution.” (Hugging Face)

    • If you want heavy custom NSFW LoRAs, you can:

      • Use the non-NSFW Rapid AIO base plus your chosen LoRAs, or
      • Use official Wan 2.2 low-noise models as the base.
  2. Prefer “camera / motion” LoRAs when you want to keep your image identity

    • Camera-zoom LoRAs (like the Chinese Zoom Out you mentioned) mostly change framing/motion, not faces. That’s why your picture still looks like your picture.
    • Use those for camera dynamics; use character/style LoRAs carefully and at low strength when you must preserve the input photo.

4. Ultra-short “sticky note” summary

If you want a 30-second checklist pinned next to ComfyUI:

  • Make sure plain Mega I2V (no LoRA) animates your image correctly.
  • Use Wan 2.1 or low-noise Wan 2.2 LoRAs only; not high-noise. (Hugging Face)
  • Put the LoRA on the model path (often both high + low branches, or before any Lightning LoRA). (Reddit)
  • Start with low strength and nudge up; remember Mega already has LoRAs baked in. (Hugging Face)
  • Keep your prompt aligned with the photo; don’t describe a totally different scene. (Reddit)

Follow those five points and you should stop seeing LoRAs completely override your picture, while still getting their style and motion benefits.

This kind of got it working. I use a GGUF checkpoint instead.