Pass tokenizer or model arguments

When I use an ASR model, I get the following message in the logs:

/opt/conda/lib/python3.9/site-packages/transformers/generation_utils.py:1296: UserWarning: Neither max_length nor max_new_tokens has been set, max_length will default to 20 (self.config.max_length). Controlling max_length via the config is deprecated and max_length will be removed from the config in v5 of Transformers – we recommend using max_new_tokens to control the maximum length of the generation.

With generative models, arguments like max_length can typically be passed to the pipeline object to control the tokenizer or model, but my attempts to pass them in a request to an ASR endpoint did not work. I know I could create a custom handler, but I'm curious whether there is a way to do it with the default endpoint.
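For context, this is a sketch of the kind of request body I have been trying, assuming generation arguments go under a "parameters" field the way they do for other pipeline tasks (the audio URL and parameter placement here are just placeholders, not a confirmed format for ASR endpoints):

```python
import json

# Request body I attempted to send to the default ASR endpoint.
# "parameters" placement is an assumption borrowed from other tasks;
# this is the part that did not seem to take effect.
payload = {
    "inputs": "https://example.com/sample.flac",  # placeholder audio URL
    "parameters": {
        "max_new_tokens": 128,  # what I would like to control
    },
}

body = json.dumps(payload)
print(body)
```

The question is whether the default endpoint handler forwards anything like this to the model's generate call, or whether it ignores it entirely.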
