Hi,
I use Mask2Former for instance segmentation with direct integration in CVAT using the standard pipeline.
I noticed that Mask2Former for instance segmentation tends to detect:
- Lots of small objects (a few pixels) that are not interesting.
- Sometimes, a mask for an object that swallows a large part of the background.
It’s not only our models; the pretrained demo models from Facebook do the same.
Claude suggested adding “mask_size_range”: [0.05, 0.80] in config.json, but I see no effect in CVAT, and I did not find this parameter anywhere in the documentation…
I can’t add a filter, because CVAT’s automatic integration of Mask2Former has to use the standard pipeline; otherwise I would have to write a custom integration, which I would like to avoid for the time being.
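For context, the kind of post-processing step I’d like to avoid writing myself is roughly this (a minimal sketch; `min_frac` and `max_frac` are made-up thresholds, not real Mask2Former or CVAT parameters):

```python
import numpy as np

def filter_masks_by_area(masks, min_frac=0.05, max_frac=0.80):
    """Keep only instance masks whose area, as a fraction of the image,
    lies within [min_frac, max_frac].

    masks: list of boolean numpy arrays, all of shape (H, W).
    The thresholds are hypothetical values for illustration only.
    """
    kept = []
    for mask in masks:
        frac = mask.sum() / mask.size  # fraction of image pixels covered
        if min_frac <= frac <= max_frac:
            kept.append(mask)
    return kept

# Example: a tiny speck and a near-full-frame mask are dropped,
# a medium-sized mask is kept.
h, w = 100, 100
speck = np.zeros((h, w), dtype=bool)
speck[0, 0] = True                      # 0.01% of the image
huge = np.ones((h, w), dtype=bool)
huge[:10, :] = False                    # 90% of the image
medium = np.zeros((h, w), dtype=bool)
medium[20:60, 20:60] = True             # 16% of the image
print(len(filter_masks_by_area([speck, huge, medium])))  # -> 1
```

Something like this would have to run between the model output and CVAT, which is exactly the custom integration I’d rather not maintain.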
Is there a way to prevent Mask2Former from creating very small and very big masks?
For big masks, I’m wondering whether I should switch to a panoptic Mask2Former so that it learns the background better. For small masks, I have no ideas, unless there is some parameter for that.
Best regards