Batch generation with GPT2

Is it possible to have a variable max_gen_length, depending on the length of the input sequence? For instance, something like max_gen_length = len(tokenizer.tokenize(input_seq)) + 20?
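
For context, here's a minimal sketch of the kind of thing I'm after, assuming the Hugging Face transformers API (in recent versions, `max_new_tokens` seems to express "prompt length + N" per sequence directly; the prompts below are just illustrative):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
tokenizer.padding_side = "left"            # left-pad so generation continues each prompt

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Illustrative prompts of different lengths
input_seqs = ["Hello, my name is", "The weather today in Paris"]
batch = tokenizer(input_seqs, return_tensors="pt", padding=True)

with torch.no_grad():
    # max_new_tokens bounds only the *generated* tokens, so each sequence
    # effectively gets a max total length of its own prompt length + 20
    output_ids = model.generate(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        max_new_tokens=20,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```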