Weird example of batching in the Dataset.map documentation

In the documentation for Dataset.map (here), the example given under “Batch processing” → “Split long examples” says that “Batch processing enables interesting applications such as splitting long sentences into shorter chunks and data augmentation”, with the following code:

def chunk_examples(examples):
    chunks = []
    for sentence in examples["sentence1"]:
        chunks += [sentence[i:i + 50] for i in range(0, len(sentence), 50)]
    return {"chunks": chunks}
chunked_dataset = dataset.map(chunk_examples, batched=True, remove_columns=dataset.column_names)

This example looks weird to me, because the chunking logic only ever looks at one sentence at a time, so it seemingly should not matter whether we run it row by row or in a batched way; a small sketch of what I mean is below. Did I misunderstand anything here?
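
For instance, on a toy sentence of my own (not from the docs), the per-sentence logic is independent of any batching:

sentence = "a" * 120  # one long row
chunks = [sentence[i:i + 50] for i in range(0, len(sentence), 50)]
print(len(chunks))  # 3 -- the chunking never looks at any other row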

By default, map takes 1 example as input and requires the output to be 1 example as well.

But a batched map can take an input batch of size N and output a batch of size M.
The code you quoted indeed returns a batch with more examples than the input: each sentence becomes one or more chunks, so the output batch is larger than the input batch.
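
A quick way to see it is to compare row counts on a toy dataset (the two-sentence data below is made up purely for illustration):

from datasets import Dataset

dataset = Dataset.from_dict({"sentence1": ["a" * 120, "b" * 30]})

def chunk_examples(examples):
    chunks = []
    for sentence in examples["sentence1"]:  # in batched mode this is a list of N sentences
        chunks += [sentence[i:i + 50] for i in range(0, len(sentence), 50)]
    return {"chunks": chunks}  # M chunks in total; M need not equal N

chunked_dataset = dataset.map(chunk_examples, batched=True, remove_columns=dataset.column_names)
print(len(dataset), len(chunked_dataset))  # 2 4 -> the batched map grew the dataset

With batched=False the function would instead receive a single example, so examples["sentence1"] would be one string (and the loop would iterate over its characters). More importantly, a non-batched map has to return exactly one example per input, so it could never turn 2 rows into 4.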