Chapter 6 - Why are the tokens and word_ids for the 2nd sentence not returned?

I am trying to extract the tokens and word_ids of two sentences after tokenizing them with a pre-trained AutoTokenizer, as follows. Why am I getting the tokens and word_ids only for the first sentence? I'd appreciate any input.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

example = ["My name is Srinivas and I am self-employed.", "I work as Data Science and Machine Learning Scientist in Dubai."]

I figured out that I can extract the tokens or word_ids for each sentence by passing its index (here either 0 or 1) to the respective methods, as shown below. I would like to know how I can get these values for both sentences at once. Is there any documentation on this? I searched the HF docs but could not find any.

encoding = tokenizer(example)
print(encoding.tokens(1))
print(encoding.word_ids(1))


['[CLS]', 'I', 'work', 'as', 'Data', 'Science', 'and', 'Machine', 'Learning', 'Scientist', 'in', 'Dubai', '.', '[SEP]']
[None, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, None]
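Not an authoritative answer, but for what it's worth: `BatchEncoding.tokens()` and `BatchEncoding.word_ids()` default to batch index 0, which is why only one sentence shows up. Since both methods accept the batch index as an argument, one way to get the values for every sentence "simultaneously" is a list comprehension over the batch indices. A minimal sketch (variable names `all_tokens` / `all_word_ids` are my own, not from the course):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

example = [
    "My name is Srinivas and I am self-employed.",
    "I work as Data Science and Machine Learning Scientist in Dubai.",
]

# Tokenizing a list of strings produces a batch; each sentence is
# addressed by its batch index in tokens(i) and word_ids(i).
encoding = tokenizer(example)

# Collect tokens and word_ids for every sentence in one go.
all_tokens = [encoding.tokens(i) for i in range(len(example))]
all_word_ids = [encoding.word_ids(i) for i in range(len(example))]

for tokens, word_ids in zip(all_tokens, all_word_ids):
    print(tokens)
    print(word_ids)
```

This prints the token list and word_id list for each sentence in turn, rather than just the first one.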