MaskFormer loss: fine-tuning with weighted loss

I'm fine-tuning MaskFormer on a custom semantic segmentation dataset:
from transformers import MaskFormerForInstanceSegmentation

model = MaskFormerForInstanceSegmentation.from_pretrained(
    "facebook/maskformer-swin-base-ade",
    id2label=id2label,
    ignore_mismatched_sizes=True,
)
and this is how I get the loss:
outputs = model(
    pixel_values=batch["pixel_values"].to(dtype=torch.float32).to(device),
    mask_labels=[labels.to(device) for labels in batch["mask_labels"]],
    class_labels=[labels.to(device) for labels in batch["class_labels"]],
)

Backward propagation

loss = outputs.loss
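
The backward step itself is the standard PyTorch loop. Sketched here with a stand-in linear model and a placeholder loss so the snippet runs on its own; in my actual script, model is the MaskFormer above and loss is outputs.loss:

```python
import torch

# Stand-in model and optimizer so this snippet is self-contained;
# in the real script, `model` is the MaskFormer instance.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

x = torch.randn(8, 4)
loss = model(x).pow(2).mean()  # placeholder for outputs.loss

optimizer.zero_grad()
loss.backward()   # backpropagate through the (summed) loss
optimizer.step()  # update the weights
```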
The problem is that the labeled regions are small, so the model predicts almost the entire image as one class and almost none of the other. I want to make the loss weighted, or at least give more weight to the dice loss. I tried changing it through the model config, but it still didn't work:

model.config.dice_weight = 10.0
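
From reading modeling_maskformer.py, it looks like the loss weights are copied out of the config into the matcher and criterion when the model is constructed, so changing model.config.dice_weight after loading has no effect. A minimal sketch of what I think should work instead, assuming the weight only needs to be set before construction (shown with a fresh, randomly initialized config so it runs standalone):

```python
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

# Set the dice weight *before* the model is built; __init__ reads
# config.dice_weight when it constructs the matcher and the criterion.
config = MaskFormerConfig(dice_weight=10.0)
model = MaskFormerForInstanceSegmentation(config)

# The criterion's weight dict now carries the boosted dice term.
print(model.criterion.weight_dict["loss_dice"])
```

With a pretrained checkpoint, I believe extra kwargs to from_pretrained are forwarded to the config, so the same effect should come from:

model = MaskFormerForInstanceSegmentation.from_pretrained(
    "facebook/maskformer-swin-base-ade",
    id2label=id2label,
    ignore_mismatched_sizes=True,
    dice_weight=10.0,
)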