Can't change max_input_length of Text Generation Inference

It appears that TGI CLI options can be passed as trailing arguments to the docker command in the TGI Docker launch script (this works for --model-id, --max-total-tokens, and --quantize, for example). However, when I try to specify --max-input-tokens, I get this error:
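For reference, this is roughly the kind of launch command I mean, following the pattern from the TGI docs; the model, volume, and image tag below are placeholders, not my exact setup:

```
# Placeholder values -- substitute your own model id, data volume, and image tag
model=HuggingFaceH4/zephyr-7b-beta
volume=$PWD/data

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id $model \
    --max-total-tokens 4096 \
    --quantize bitsandbytes \
    --max-input-tokens 4095   # this is the option that gets rejected
```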

```
error: unexpected argument '--max-input-tokens' found
```