Getting vocabulary IDs of phrases to exclude them from the generate function

I want to get the vocabulary IDs of some phrases so that I can exclude them from text generation with GPT-2.

I use AutoConfig and AutoTokenizer, and when I try to get the IDs I want to exclude with

```python
tokenizer(bad_word, add_prefix_space=True).input_ids
```

as suggested in the documentation of the `bad_words_ids` argument of the [generate](Models — transformers 4.4.2 documentation) function, I get the error:

```
_batch_encode_plus() got an unexpected keyword argument 'add_prefix_space'
```

Do I have to use this argument, and why is this error thrown?
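For context, my understanding is that `generate`'s `bad_words_ids` expects a list of lists of token IDs, one inner list per banned phrase. A toy sketch of that shape (the vocabulary here is made up for illustration, not the real GPT-2 tokenizer):

```python
# Toy stand-in for a tokenizer, just to show the expected shape of
# bad_words_ids: a list of lists of token IDs, one list per phrase.
toy_vocab = {"bad": 7, "word": 11, "phrase": 13}

def toy_ids(phrase):
    # Map each whitespace-separated token to its (made-up) vocabulary ID.
    return [toy_vocab[token] for token in phrase.split()]

bad_words_ids = [toy_ids(p) for p in ["bad word", "phrase"]]
# bad_words_ids is now [[7, 11], [13]]
```

This is only meant to show the structure I am trying to build with the real tokenizer.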