Hi all,
I am using this notebook created by @valhalla to fine-tune a T5 model on my own classification task. I would like to apply some kind of class weighting in my loss function, since I am dealing with highly imbalanced data. Here is what I have tried so far:
def forward(
    self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, lm_labels=None
):
    # in lightning, forward defines the prediction/inference actions
    return self.model(
        input_ids,
        attention_mask=attention_mask,
        decoder_input_ids=decoder_input_ids,
        decoder_attention_mask=decoder_attention_mask,
        lm_labels=lm_labels,
    )
def _step(self, batch):
    lm_labels = batch["target_ids"]
    lm_labels[lm_labels[:, :] == self.tokenizer.pad_token_id] = -100
    outputs = self(
        input_ids=batch["source_ids"],
        attention_mask=batch["source_mask"],
        lm_labels=lm_labels,
        decoder_attention_mask=batch["target_mask"],
    )
    logits = outputs[1]

    ##### IMBALANCE LEARNING
    class_weights = torch.FloatTensor(self.hparams.class_weights).cuda()
    loss_fct = CrossEntropyLoss(ignore_index=-100, weight=class_weights)
    loss = loss_fct(logits, lm_labels)
    return loss
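For context on what I expected to happen, here is a minimal, standalone sketch of how I understand `CrossEntropyLoss` to work on sequence outputs (the shapes and `vocab_size` here are made-up illustration values, not from the notebook): the logits come out as `(batch, seq_len, vocab_size)` and the labels as `(batch, seq_len)`, and both apparently need to be flattened before calling the loss, with the `weight` tensor sized per class of the loss:

```python
import torch
from torch.nn import CrossEntropyLoss

# Illustrative shapes only: T5 produces one logit vector per decoder position.
batch_size, seq_len, vocab_size = 2, 5, 10
logits = torch.randn(batch_size, seq_len, vocab_size)
labels = torch.randint(0, vocab_size, (batch_size, seq_len))
labels[:, -1] = -100  # mark padding positions so the loss ignores them

# `weight` needs one entry per class the loss sees, i.e. vocab_size here,
# not one entry per task label.
class_weights = torch.ones(vocab_size)
loss_fct = CrossEntropyLoss(ignore_index=-100, weight=class_weights)

# CrossEntropyLoss expects (N, C) logits and (N,) integer targets,
# so flatten the batch and sequence dimensions together.
loss = loss_fct(logits.view(-1, vocab_size), labels.view(-1))
print(loss.item())
```

If my understanding above is right, then in my `_step` the logits would also need this `.view(-1, ...)` flattening, and a two-element `class_weights` would not match the vocabulary-sized class dimension, but I am not sure.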
But it doesn't work. I am passing a class_weights list of two elements (the number of classes in my task) as a hyperparameter.
I think I don't fully understand how the loss is computed from the logits and the labels.
I would appreciate any help, since I am pretty stuck.
Best,
Marcos