How to eliminate bias in summarization models

Hello everyone, is there a way I can reduce bias (hallucinated content) in transformer summarization models like Pegasus and BART?

Here is the problem: I'm trying to fine-tune a summarization model to generate headlines, but sometimes the model inserts things into the generated headline that are not present in the news article, e.g. replacing "President Biden" with "President Trump", or "Covid-19" with some other, older virus.
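
For context, here's roughly how I'm generating the headlines (a minimal sketch using the Hugging Face `transformers` API; the checkpoint name and the article text are just placeholders, not my actual fine-tuned model or data):

```python
# Minimal sketch of my generation setup. The checkpoint and the article
# are placeholders; my real model is fine-tuned on a headline dataset.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-large-cnn"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = (
    "President Biden met with health officials on Tuesday to discuss "
    "the next phase of the Covid-19 vaccination campaign..."
)

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
output_ids = model.generate(
    **inputs,
    num_beams=4,
    max_length=32,       # headlines are short
    early_stopping=True,
)
headline = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(headline)  # occasionally swaps entities, e.g. "Biden" -> "Trump"
```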

Is there a way I can solve, or ideally eliminate, this completely? The only workaround I've come up with so far is a crude post-generation filter (rough sketch below), but I'd much rather fix this at training or decoding time. Your responses will be very appreciated.
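
The filter I have in mind just rejects headlines that mention a capitalized token (a very rough stand-in for a named entity) that never appears in the source article. This is purely my own ad-hoc helper, nothing standard:

```python
# Crude post-generation check: flag headlines whose capitalized tokens
# (a rough proxy for named entities) never occur in the source article.
import re

def headline_is_grounded(headline: str, article: str) -> bool:
    """Return False when the headline contains a capitalized token
    that the article never mentions."""
    article_tokens = {t.lower() for t in re.findall(r"[A-Za-z0-9-]+", article)}
    candidate_entities = re.findall(r"\b[A-Z][A-Za-z0-9-]+\b", headline)
    return all(tok.lower() in article_tokens for tok in candidate_entities)

# The swapped entity "Trump" is caught because it never appears
# in the source article.
article = "President Biden met with health officials to discuss Covid-19."
print(headline_is_grounded("Biden discusses Covid-19 plans", article))  # True
print(headline_is_grounded("Trump discusses Covid-19 plans", article))  # False
```

Obviously this misses paraphrased hallucinations and flags sentence-initial words, so I'm hoping there's a better model-level approach.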

Thanks