Also, I’ve tried wrapping it inside a `BertTokenizerFast` object and calling it in the following way:
```python
new_tokenizer = BertTokenizerFast(tokenizer_object=tokenizer, max_len=1024)
new_tokenizer.encode_plus(example_str, padding=True, truncation=True, add_special_tokens=True)
```
It still doesn’t seem to work.
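In case it helps to reproduce, here is a minimal self-contained version of what I’m running. The tiny WordPiece tokenizer trained inline is just a stand-in for my real `tokenizer`, and `example_str` is a placeholder string; everything else matches the call above:

```python
from tokenizers import Tokenizer, trainers
from tokenizers.models import WordPiece
from tokenizers.pre_tokenizers import Whitespace
from transformers import BertTokenizerFast

# Stand-in for my actual trained tokenizer: a throwaway WordPiece
# tokenizer trained on a toy corpus so the snippet runs on its own.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = trainers.WordPieceTrainer(
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]
)
tokenizer.train_from_iterator(["some text to train on"] * 10, trainer=trainer)

# Same wrapping call as above, with max_len=1024 kept as-is
new_tokenizer = BertTokenizerFast(tokenizer_object=tokenizer, max_len=1024)

example_str = "some text to encode"  # placeholder input
enc = new_tokenizer.encode_plus(
    example_str, padding=True, truncation=True, add_special_tokens=True
)
print(enc["input_ids"])
print(new_tokenizer.model_max_length)  # to check whether max_len took effect
```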