@marton-avrios, there was a trend within abstractive summarisation benchmarks that encouraged extractive-like summaries, i.e. the generated summaries reproduced existing sentences from the source and were therefore naturally longer.
As suggested by @valhalla, the XSum task was explicitly created to encourage short, abstractive summaries. Since you want a multilingual model, I suggest first fine-tuning mBART or T5 on XSum and then applying those models to your custom data.
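For reference, here is a minimal sketch of preparing XSum-style records for T5 fine-tuning. Field names follow the Hugging Face `datasets` version of XSum ("document"/"summary"); the actual training run would go through `Seq2SeqTrainer` or the summarisation example scripts in transformers, which is beyond the scope of this snippet.

```python
# Sketch: mapping XSum-style records to T5's text-to-text format.
# Assumes the Hugging Face `datasets` XSum schema: each record has a
# "document" field (the article) and a "summary" field (the one-sentence abstract).

def to_t5_format(record, max_doc_chars=4000):
    """Map one XSum record to a T5 input/target pair.

    T5 expects a task prefix on the input; "summarize: " selects the
    summarisation behaviour it was pre-trained with. The document is
    truncated here only to keep the example simple; in practice the
    tokenizer's max_length handles truncation.
    """
    return {
        "input_text": "summarize: " + record["document"][:max_doc_chars],
        "target_text": record["summary"],
    }

# Tiny stand-in for load_dataset("xsum") followed by ds.map(to_t5_format):
toy_batch = [
    {"document": "The new bridge across the river opened on Monday after two years of construction.",
     "summary": "A new river bridge has opened."},
]
prepared = [to_t5_format(r) for r in toy_batch]
print(prepared[0]["input_text"])  # starts with the "summarize: " prefix
```

Note that mBART does not use a text prefix; it conditions on language-code tokens instead, so the equivalent mapping would drop the `"summarize: "` string and set the source/target language codes on the tokenizer.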