Paragraph split / formatting in model output text

I am using the Inference API (from Node.js with axios) to call the bart-large-cnn model for summarization.

Is there a way to get paragraph splits (new lines) in the returned text? I always get the result back as one big chunk, which is hard to read, especially when I increase the min_length and max_length parameters and get longer text back as a result.
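
For reference, here is a simplified sketch of roughly what my call looks like (token handling and the actual parameter values are just illustrative here):

```typescript
import axios from "axios";

// Hosted Inference API endpoint for the summarization model
const API_URL =
  "https://api-inference.huggingface.co/models/facebook/bart-large-cnn";
const HF_TOKEN = process.env.HF_TOKEN; // my API token (placeholder)

async function summarize(text: string): Promise<string> {
  const response = await axios.post(
    API_URL,
    {
      inputs: text,
      parameters: {
        min_length: 200, // raising these gives longer, harder-to-read output
        max_length: 400,
      },
    },
    { headers: { Authorization: `Bearer ${HF_TOKEN}` } }
  );
  // The API returns an array like [{ summary_text: "..." }],
  // and summary_text comes back as one continuous block with no line breaks
  return response.data[0].summary_text;
}
```

The `summary_text` I get back never contains any `\n` characters, so everything renders as a single paragraph on my end.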