Summarizing novel-length text

From what I understand, the current best approach to summarizing novel-length text is roughly the following:

1. Tokenize the full text and split the token sequence into chunks.
2. Summarize each chunk with a model that has a long maximum input length, such as LED (Longformer Encoder-Decoder).
3. Concatenate the chunk summaries and repeat the process until the combined sequence is short enough.
4. Decode the final output sequence back to text.

But I don’t know how to actually implement these steps (my rough attempt is sketched below).
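
For concreteness, here is an untested sketch of what I think the pipeline might look like. It uses the Hugging Face `transformers` library with the `allenai/led-base-16384` checkpoint; the chunk size, target length, generation settings, and the `novel.txt` filename below are my own guesses and placeholders, not recommended values:

```python
# Rough, untested sketch of the recursive chunk-and-summarize pipeline
# described above. The chunk sizes and generation settings are guesses.
from transformers import LEDTokenizer, LEDForConditionalGeneration

MODEL_NAME = "allenai/led-base-16384"
tokenizer = LEDTokenizer.from_pretrained(MODEL_NAME)
model = LEDForConditionalGeneration.from_pretrained(MODEL_NAME)

CHUNK_TOKENS = 8192    # tokens per chunk, well under LED's 16k input limit
TARGET_TOKENS = 1024   # stop recursing once the text fits in one pass

def summarize_chunk(text: str) -> str:
    """Summarize one chunk and decode the output IDs back to text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True,
                       max_length=CHUNK_TOKENS)
    # LED uses global attention; putting it on the first token is the
    # convention from the LED model card.
    global_attention_mask = inputs["input_ids"].new_zeros(inputs["input_ids"].shape)
    global_attention_mask[:, 0] = 1
    output_ids = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        global_attention_mask=global_attention_mask,
        max_length=512,
        num_beams=4,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def summarize_long(text: str) -> str:
    """Recursively chunk, summarize, and recombine until short enough."""
    token_ids = tokenizer(text)["input_ids"]
    if len(token_ids) <= TARGET_TOKENS:
        return summarize_chunk(text)
    # Split the token sequence into chunks, decoding each slice back to
    # text so it can be fed through the tokenizer independently.
    chunks = [
        tokenizer.decode(token_ids[i:i + CHUNK_TOKENS], skip_special_tokens=True)
        for i in range(0, len(token_ids), CHUNK_TOKENS)
    ]
    combined = " ".join(summarize_chunk(chunk) for chunk in chunks)
    return summarize_long(combined)  # repeat on the concatenated summaries

print(summarize_long(open("novel.txt").read()))  # hypothetical input file
```

One thing I'm unsure about in this sketch is whether splitting on raw token boundaries (as above) is acceptable, or whether the chunks should instead be split on paragraph or chapter boundaries so each one stays coherent.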