Inference Hyperparameters

I don’t quite understand…

Why do you say I don’t need to provide the parameters at inference time? Does a model trained on texts truncated at 256 tokens make good predictions on texts longer than 256 tokens?
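To make the question concrete, here is a minimal sketch (plain Python, no library calls; the 256 limit is the one from my training setup) of what I understand the tokenizer to be doing when `truncation` is on — anything past the maximum length is simply cut off before the model ever sees it:

```python
# Hypothetical illustration of truncation at inference, mirroring the
# max_length=256 used during training. A real tokenizer with
# truncation=True behaves like this slice on the token ids.
MAX_LENGTH = 256

def truncate(token_ids, max_length=MAX_LENGTH):
    """Keep only the first max_length tokens; the rest are dropped."""
    return token_ids[:max_length]

long_input = list(range(300))   # stands in for a 300-token text
truncated = truncate(long_input)
print(len(truncated))           # 256
```

So my worry is whether the model is actually predicting on the full long text, or only on the first 256 tokens of it.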

On the other hand, where should I put this tokenizer_config…?
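If it helps clarify what I mean: my assumption (based on the usual Hugging Face layout, so please correct me if this is wrong) is that `tokenizer_config.json` would live in the model directory next to `config.json` and the weights, with something like `model_max_length` set to match training. A sketch of that assumption:

```python
import json
import tempfile
from pathlib import Path

# Assumed Hugging Face layout: tokenizer_config.json sits in the model
# directory alongside config.json and the model weights. The directory
# name and the 256 value are examples, not anything official.
model_dir = Path(tempfile.mkdtemp()) / "my-finetuned-model"
model_dir.mkdir(parents=True)

tokenizer_config = {"model_max_length": 256}
config_path = model_dir / "tokenizer_config.json"
config_path.write_text(json.dumps(tokenizer_config, indent=2))

print(config_path.read_text())
```

Is that the right place, or does it go somewhere else?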