Tokenize iterable dataset

From what I understand, it's better to set `batch_size=1` when mapping a tokenize function over an iterable dataset, right? Or rather, to process it with `batched=False`?
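For reference, here's roughly what I mean — a minimal sketch of the two variants I'm comparing, assuming a streaming dataset from the `datasets` library and a fast tokenizer (the dataset and model names are just placeholders):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Streaming returns an IterableDataset instead of loading everything into memory
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(examples):
    # With batched=False, examples["text"] is a single string;
    # with batched=True, it's a list of strings. The tokenizer accepts both.
    return tokenizer(examples["text"], truncation=True)

# Option A: per-example processing (the function sees one example at a time)
tokenized_a = dataset.map(tokenize, batched=False)

# Option B: batched processing with the smallest possible batch
# (the function sees a dict of lists, each of length 1)
tokenized_b = dataset.map(tokenize, batched=True, batch_size=1)
```

Is there any practical difference between these two on an iterable dataset, and is one of them the recommended way?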