Text Input Sequence Error

input_text = []
for i in range(0, 17880):
    text = categorical_data['company_profile'].iloc[i]
    input_text.append(text)
input_text = tuple(input_text)

tokens_tf = tokenizer.encode_plus(input_text, return_tensors='tf')
When I run the code above I get a TypeError ("Text Input Sequence Error"). I converted the input text to a tuple to allow multiple input segments and checked for compatibility. I also replaced NaN values with blank spaces. The same error persists regardless.
Whereas when I tried the code below:
Tokens = []
for i in range(0, 17880):
    tokens_tf_company_profile = tokenizer.encode_plus(input_text[i], return_tensors='tf')
    Tokens.append(tokens_tf_company_profile)
the tokens are created exactly as I wanted.
Can somebody help me understand what the issue is here?

The input to a tokenizer should be either a single text or a list of texts, not a tuple.
You can pass input_text directly to the tokenizer instead of converting it to a tuple.

You could also do something like this and avoid the loop entirely:

input_text = categorical_data['company_profile'].tolist()

tokens_tf = tokenizer(input_text)
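One more thing to check: if the column still contains NaN entries, the tokenizer can raise the same "TextInputSequence must be str" error, because NaN is a float rather than a string. A minimal sketch of building a clean list of strings from the column (the small `categorical_data` DataFrame here is a toy stand-in for your real 17880-row one):

```python
import pandas as pd

# Toy stand-in for the real DataFrame; yours has 17880 rows.
categorical_data = pd.DataFrame(
    {'company_profile': ['We build rockets.', None, 'A small bakery.']}
)

# fillna('') replaces NaN/None with an empty string so every element
# is a str, then tolist() gives the plain Python list the tokenizer
# expects.
input_text = categorical_data['company_profile'].fillna('').tolist()

print(input_text)  # ['We build rockets.', '', 'A small bakery.']
print(all(isinstance(t, str) for t in input_text))  # True
```

After this, `tokenizer(input_text)` receives only strings and the type error should not occur.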

Thanks for the reply