Adding new tokens to a BERT tokenizer - Getting ValueError

I have a Python list, unique_list, containing new words that I want to add to my tokenizer with tokenizer.add_tokens. However, when I run my code I get the following error:

File "/home/kaan/anaconda3/envs/env_backup/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 937, in add_tokens
    if not new_tokens:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

When I tested with a small array of 10 random words it worked fine, but the larger unique_list triggers the error.
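For reference, here is a minimal sketch that reproduces the same ValueError without transformers, under the assumption that unique_list is actually a NumPy array (e.g. the output of np.unique or pandas' Series.unique()) rather than a plain Python list:

```python
import numpy as np

# Assumption: unique_list is really a NumPy array, not a Python list.
# np.unique and pandas' Series.unique() both return arrays.
unique_list = np.array(["alpha", "beta", "gamma"])

# transformers' add_tokens begins with a truthiness check (`if not new_tokens:`).
# On a NumPy array with more than one element, that check is ambiguous
# and raises the ValueError shown in the traceback above.
try:
    if not unique_list:
        pass
except ValueError as err:
    print(type(err).__name__)  # ValueError

# A plain Python list has an unambiguous truth value, so converting
# with .tolist() before calling add_tokens would sidestep the check.
print(isinstance(unique_list.tolist(), list))  # True
```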

What am I doing wrong here?