How to convert a model's output into a colored image with the image we want to test

@John6666 @Alanturner2

Guys, sorry, a weird question, probably a stupid one.

What do you call making these for segmentation? I want to look for a guide, but I don't know what this is called. Is it combining masks with images?

Because I followed this and it only showed me how to get the mask, not how to apply it to produce those colored images.


Isn’t it simply “creating a mask for segmentation” or “creating a label for segmentation”?

But when I did it, I only got the black-and-white output, what's usually called the mask, not the colored ROI over the image.


I’ll try it too.

Oh wait, I thought it was well-known knowledge, because every tutorial I found just colors their images like it's second nature.


I’ve already tried it. In the example above, it generates several binary masks. There is no coloring step in the first place, so I think they are adding it with OpenCV or matplotlib. It seems it can also be done with torchvision.
In other words, they are just drawing pictures based on the masks they have obtained. Maybe.

Related:
https://www.reddit.com/r/computervision/comments/s0vi3f/how_to_create_segmentation_masks_for_multilabel/
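A minimal sketch of that "drawing based on the mask" idea in plain NumPy (the shapes, color, and alpha value here are made up for illustration): blend a solid color into the masked pixels only, leaving the rest of the image untouched.

```python
import numpy as np

# toy 3-channel image and a (H, W) boolean mask, just for illustration
image = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# "draw" the mask: blend a solid green into the masked pixels only
color = np.array([0, 255, 0], dtype=np.uint8)
alpha = 0.5
overlay = image.copy()
overlay[mask] = (image[mask] * (1 - alpha) + color * alpha).astype(np.uint8)
```

Unmasked pixels stay at their original value; masked pixels get pulled toward green in proportion to `alpha`.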


Very helpful, thank you again John!


Sorry, I’m running a bit late for the party. :blush:

  import cv2
  import numpy as np

  def segment_image(image, mask_generated):
      masked_image = image.copy()
      masked_image = np.where(mask_generated.astype(int),
                              np.array([0, 255, 0], dtype='uint8'),
                              masked_image)
      masked_image = masked_image.astype(np.uint8)
      return cv2.addWeighted(image, 0.3, masked_image, 0.7, 0)

(from first link)
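Just to illustrate what `np.where` is doing there (a toy example with made-up shapes): a plain `(H, W)` mask needs a channel axis before it can broadcast against the 3-channel color and image.

```python
import numpy as np

image = np.full((2, 2, 3), 10, dtype=np.uint8)   # tiny 3-channel image
mask = np.array([[1, 0], [0, 0]])                # (H, W) binary mask
color = np.array([0, 255, 0], dtype=np.uint8)

# add a channel axis so the mask broadcasts per pixel:
# (2, 2, 1) vs (3,) vs (2, 2, 3) -> result shape (2, 2, 3)
masked = np.where(mask[..., None].astype(bool), color, image)
```

Where the mask is true, the whole pixel becomes the color; elsewhere the original pixel is kept.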

Instead, couldn’t we just update the green channel by adding a constant?


I think the point of that code is to align both arrays with each other, literally covering the picture with a green array that has the same shape as the mask?

Could you show some example code?


I mean something like this:

import numpy as np

image = np.ones((2, 2, 3))
# array([[[1., 1., 1.],
#         [1., 1., 1.]],
#        [[1., 1., 1.],
#         [1., 1., 1.]]])

mask = np.full((2, 2, 3), False)
mask[0, 0, :] = True
# array([[[ True,  True,  True],
#         [False, False, False]],
#        [[False, False, False],
#         [False, False, False]]])

# Running this adds 50 to the green channel of the masked pixels:
image[mask] = np.clip(image[mask] + [0, 50, 0], 0, 255)

# final image:
# array([[[ 1., 51.,  1.],
#         [ 1.,  1.,  1.]],
#        [[ 1.,  1.,  1.],
#         [ 1.,  1.,  1.]]])

I will try this, but by the time I'm done with it maybe this forum thread will be closed. Thanks for the other way to do it, mahmutc :grinning:

But at a quick glance, does it work on an image (not just at the array level)?

Oh, stoopid me, of course it would work on an image.


The first link worked with a little adjustment.


from transformers import pipeline

# define the model to use
semantic_segmentation_nvidia = pipeline("image-segmentation", "seand0101/segformer-b0-finetuned-ade20k-manggarai_rivergate_2")

I needed to expand the mask to three channels with this line, because the image is 3-dimensional:

 mask_expanded = np.stack([mask] * 3, axis=-1)  # Convert mask to 3 channels

Final code


import cv2
import numpy as np

# Function to apply and display the mask
def draw_mask(image, mask):
    # Ensure the mask has the same number of channels as the image
    mask_expanded = np.stack([mask] * 3, axis=-1)  # Convert mask to 3 channels
    
    # Color for the mask (fixed green here; could be randomized if needed)
    color = [0, 255, 0]

    # Apply the mask on the image
    masked_image = np.where(mask_expanded, color, image).astype(np.uint8)

    # Blend the original image and the masked image
    blended_image = cv2.addWeighted(image, 0.3, masked_image, 0.7, 0)

    return blended_image
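For completeness, here is a self-contained way to exercise that `draw_mask`-style blending with a synthetic image and mask (shapes made up; the final `cv2.addWeighted(image, 0.3, masked_image, 0.7, 0)` step is replaced by its NumPy equivalent so this sketch runs without OpenCV):

```python
import numpy as np

# synthetic stand-ins for the pipeline's image and (H, W) boolean mask
image = np.full((8, 8, 3), 120, dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True

# same steps as draw_mask above, with cv2.addWeighted replaced by
# its NumPy equivalent: dst = src1 * 0.3 + src2 * 0.7
mask_expanded = np.stack([mask] * 3, axis=-1)
masked_image = np.where(mask_expanded, [0, 255, 0], image).astype(np.uint8)
blended = (image * 0.3 + masked_image * 0.7).astype(np.uint8)
```

Masked pixels end up tinted green while unmasked pixels keep their original value, which is exactly the "colored ROI" look from the tutorials.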

This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.