Fixing jagged edges in clothes parsing (SegFormer + RankSEG)

Hey everyone,

If you’re using the segformer_b2_clothes model for downstream tasks like virtual try-on or outfit editing, you might have noticed that the standard argmax output can leave jagged, pixelated boundaries—especially around complex items like bags, skirts, and scarves.

I recently experimented with replacing the argmax step with RankSEG, a metric-aware post-processing solver from NeurIPS 2025, and the visual difference is night and day. Best part? No retraining required.
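To give a feel for what "replacing the argmax step" means, here is a minimal NumPy sketch of the core idea for the binary case: instead of thresholding each pixel at 0.5 independently, sort pixels by foreground probability and pick the top-k set that maximizes a plug-in estimate of the expected Dice. This is a simplified illustration of the RankSEG/RankDice idea, not the paper's exact algorithm or the repo's API:

```python
import numpy as np

def dice_aware_binary_decode(probs):
    """Approximate Dice-optimal binary decoding (simplified sketch of the
    RankSEG/RankDice idea, NOT the exact published algorithm).

    Sorts pixels by foreground probability and picks the top-k set that
    maximizes a plug-in estimate of the expected Dice:
        E[Dice] ~= 2 * sum_{i in top-k} p_i / (k + sum_i p_i)
    """
    flat = probs.ravel()
    order = np.argsort(flat)[::-1]           # pixel indices, most confident first
    cum = np.cumsum(flat[order])             # sum of the top-k probabilities
    total = flat.sum()                       # E[|ground-truth mask|] under Bernoulli(p_i)
    k_range = np.arange(1, flat.size + 1)
    scores = 2.0 * cum / (k_range + total)   # plug-in expected Dice for each k
    best_k = int(np.argmax(scores)) + 1
    mask = np.zeros(flat.size, dtype=bool)
    mask[order[:best_k]] = True
    return mask.reshape(probs.shape)

# Toy example: a fuzzy region with many mid-confidence boundary pixels,
# which is exactly where hard 0.5-thresholding produces jagged edges.
rng = np.random.default_rng(0)
probs = np.clip(rng.normal(0.45, 0.2, size=(8, 8)), 0.0, 1.0)
argmax_mask = probs > 0.5                    # standard decoding
dice_mask = dice_aware_binary_decode(probs)  # Dice-aware decoding
```

The point is that the decoded set is chosen jointly to optimize the evaluation metric, rather than pixel-by-pixel, which is why mid-confidence boundary pixels get handled more gracefully.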

The Hard Numbers (Tested on 500 images from the ATR dataset):

  • Global mDice: Improved from 75.54% to 77.53%

  • Notable improvements in complex categories:

    • Belt: +10.49%

    • Skirt: +3.59%

    • Bag: +2.60%
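For anyone who wants to reproduce this kind of comparison on their own predictions, per-class Dice is straightforward to compute from two label maps. A small hypothetical helper (my own sketch, not the evaluation script used for the numbers above):

```python
import numpy as np

def per_class_dice(pred, target, num_classes):
    """Per-class Dice between two integer label maps.

    For each class c: Dice = 2 * |pred==c AND target==c| / (|pred==c| + |target==c|).
    A class absent from both maps scores 1.0 by convention.
    """
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        denom = p.sum() + t.sum()
        scores.append(1.0 if denom == 0 else 2.0 * (p & t).sum() / denom)
    return np.array(scores)

# Tiny worked example with 3 classes on a 2x2 map.
pred = np.array([[0, 1], [1, 2]])
target = np.array([[0, 1], [2, 2]])
dice = per_class_dice(pred, target, num_classes=3)
mdice = dice.mean()  # mean Dice over classes
```

Averaging the per-class scores over images gives the mDice figures quoted above; breaking them out per class is how you spot the big wins on categories like Belt and Skirt.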

If you’re curious or want to verify these results, I’ve packaged the entire pipeline into a self-contained Colab notebook so you can test it on your own images right away.

Also, here’s the GitHub repo with the RankSEG implementation (rankseg/rankseg) if you want to explore further: a plug-and-play, training-free optimization module that boosts segmentation mIoU/Dice without retraining. Published in NeurIPS & JMLR, and compatible with SAM, DeepLab, SegFormer, and more.

I hope this helps anyone struggling with boundary artifacts! Feel free to try it out and let me know what you think.

Cheers,
Zhao Qingyang