Why are padding and truncation optional?

In this page:

It says that the tokenizer’s API can skip padding and truncation, as below:

* `False` or `'do_not_pad'`: no padding is applied. This is the default behavior.
* `False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior.

Aren’t padding and truncation always necessary to ensure that all sequences in a batch have the same length? I don’t understand why the defaults are `'do_not_pad'` and `'do_not_truncate'`.
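For context on what I mean: padding right-fills the shorter sequences with a pad token so the batch forms a rectangular tensor, together with an attention mask marking the real tokens. A minimal sketch in plain Python (not the actual tokenizers implementation; `pad_id=0` and the token ids are made-up values):

```python
def pad_batch(batch, pad_id=0):
    """Right-pad every sequence to the length of the longest one,
    and build an attention mask (1 = real token, 0 = padding)."""
    max_len = max(len(seq) for seq in batch)
    padded = [seq + [pad_id] * (max_len - len(seq)) for seq in batch]
    mask = [[1] * len(seq) + [0] * (max_len - len(seq)) for seq in batch]
    return padded, mask

# Two sequences of unequal length (illustrative ids only).
batch = [[101, 2023, 102], [101, 2023, 2003, 1037, 102]]
padded, mask = pad_batch(batch)
print(padded)  # [[101, 2023, 102, 0, 0], [101, 2023, 2003, 1037, 102]]
print(mask)    # [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
```

So without padding the tokenizer returns ragged lists of different lengths, which is what prompts my question about the defaults.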