It’s hard to know without knowing how you trained/finetuned the model at that point. A model is likely to perform poorly on incoming data that does not resemble its training data.
All I was mentioning was that truncating is sometimes a valid strategy (especially if your input is only slightly above your actual max_length, something like < 1.1 * max_length). The overall summary quality is better than running summarization on a very small chunk (< 0.1 * max_length), which is most likely to simply repeat its input, leaving you with a good summary concatenated with the end of the article.
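As a rough sketch of that decision logic (the function name and the exact thresholds here are just illustrative, not from any library):

```python
def choose_strategy(input_len, max_length, truncate_ratio=1.1):
    """Pick a summarization strategy based on how far the input
    overflows the model's max_length. Thresholds are illustrative."""
    if input_len <= max_length:
        return "full"       # fits entirely, nothing to do
    if input_len <= truncate_ratio * max_length:
        return "truncate"   # slight overflow: truncating loses little
    return "chunk"          # far too long: split and summarize per chunk


print(choose_strategy(1000, 1024))  # full
print(choose_strategy(1100, 1024))  # truncate
print(choose_strategy(4000, 1024))  # chunk
```

Note the asymmetry: truncating a slightly-too-long input drops a small tail, while chunking it would leave a tiny final chunk that the model tends to echo back verbatim.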
As always, the best approach is still to try the different options and see what works best for your use case on your data.