Inference Hyperparameters

Okay.

Here you suggested that I add the truncation parameter. This is a parameter related to the tokenizer, so why can it be used as a “pipeline parameter”? I’m looking for a solution like:

parameters = {
    'truncation': True,
    'max_length': 256,
    'padding': True,
}

But I don’t know whether those names are right (except truncation, since you suggested that name). It also seems that the max_length parameter has no effect on predictions, because the predictions come out exactly the same with different values.
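One likely explanation for max_length appearing to do nothing: truncation only changes inputs that are longer than max_length, so for short texts every max_length value yields the same tokens and therefore identical predictions. A toy sketch of that behavior (plain Python, not the actual transformers API, just to illustrate the logic):

```python
def truncate(tokens, max_length, truncation=True):
    # Mimic tokenizer truncation: only inputs longer than
    # max_length are affected; shorter inputs pass through unchanged.
    if truncation and len(tokens) > max_length:
        return tokens[:max_length]
    return tokens

short = "a short sentence here".split()  # 4 tokens, well under any limit
print(truncate(short, 128) == truncate(short, 256))  # True: max_length irrelevant

long_input = ["tok"] * 300  # 300 tokens, over the limit
print(len(truncate(long_input, 256)))  # 256: now max_length matters
```

So if your test inputs are shorter than every max_length value you tried, identical predictions are the expected behavior, not a sign that the parameter is being ignored.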