Custom loss function for token classification

I can’t find any examples using the num_items_in_batch parameter to the loss function that appears in the latest transformers version. I want to implement a custom loss function that biases my model toward precision. Do I need to downgrade transformers to 4.25.2, or are there examples anywhere of how to use this new parameter? If I just ignore it, I end up with mismatched tensor sizes.
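For what it's worth, here is a minimal sketch of what such a loss could look like. In recent transformers versions the Trainer passes num_items_in_batch (the number of loss-contributing tokens accumulated across gradient-accumulation steps) so the loss can be normalized correctly; a common pattern is to reduce with a sum and divide by that count when it is supplied. The function name, the assumption that label index 0 is the "O"/negative class, and the weight value 2.0 are all illustrative, not from the original thread:

```python
import torch
import torch.nn.functional as F

def precision_biased_loss(logits, labels, num_items_in_batch=None):
    """Weighted cross-entropy for token classification.

    Up-weighting the negative ("O") class makes the model more reluctant
    to predict entity labels, trading recall for precision. The weights
    are illustrative and untuned.
    """
    num_labels = logits.size(-1)
    class_weights = torch.ones(num_labels, dtype=logits.dtype)
    class_weights[0] = 2.0  # assumed: index 0 is the "O"/negative class

    # Sum (not mean) so we can normalize ourselves below.
    loss = F.cross_entropy(
        logits.view(-1, num_labels),
        labels.view(-1),
        weight=class_weights,
        ignore_index=-100,
        reduction="sum",
    )
    if num_items_in_batch is not None:
        # Normalize by the token count the Trainer accumulated.
        loss = loss / num_items_in_batch
    else:
        # Fall back to the local count of non-ignored tokens.
        loss = loss / (labels != -100).sum().clamp(min=1)
    return loss
```

If you wire this into the Trainer (e.g. via a compute_loss override or the compute_loss_func argument in newer versions), you would extract the logits from the model outputs first; the exact hook depends on your transformers version, so check the Trainer source you have installed.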
TIA


It is possible that the problem has already been solved in the dev version.
Sorry if this is a different question.

pip install git+https://github.com/huggingface/transformers

Thanks, but I don’t think this resolves my question. That new ‘num_items_in_batch’ parameter must have to be used somehow when I define my loss function, but there are no indications of how to use it.
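One way to avoid downgrading is to give the parameter a default, so the same callable works whether or not the Trainer passes it. A toy sketch (the function name and the squared-error "loss" are purely illustrative, assuming newer versions pass num_items_in_batch as a keyword):

```python
def my_loss(outputs, labels, num_items_in_batch=None):
    # Toy per-token squared error, just to show the calling convention.
    total = sum((o - l) ** 2 for o, l in zip(outputs, labels))
    # Normalize by the accumulated count when the Trainer supplies it,
    # otherwise by the local batch size.
    denom = num_items_in_batch if num_items_in_batch is not None else len(labels)
    return total / denom

# Works with either calling convention:
old_style = my_loss([1.0, 2.0], [0.0, 2.0])                        # -> 0.5
new_style = my_loss([1.0, 2.0], [0.0, 2.0], num_items_in_batch=4)  # -> 0.25
```

The important part is only the signature: accept num_items_in_batch with a default and use it as the normalization denominator when present.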


I couldn’t find any other information…