GPT-2 token of a specific string

I have the following texts to tokenize:

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2', bos_token='<|startoftext|>', eos_token='<|endoftext|>', pad_token='<|pad|>')
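# Note on the special tokens (consistent with the output below): '<|endoftext|>'
# already exists in GPT-2's vocab as id 50256, while the bos and pad tokens are
# newly added, so they get the next ids, 50257 and 50258.
print(tokenizer.bos_token_id, tokenizer.eos_token_id, tokenizer.pad_token_id)  # 50257 50256 50258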

x = []

text1 = tokenizer('<|startoftext|>' + 'Question. A' + '<|endoftext|>', truncation=True, max_length=20, padding="max_length")
x.append(text1['input_ids'])

text2 = tokenizer('<|startoftext|>' + 'Question. B' + '<|endoftext|>', truncation=True, max_length=20, padding="max_length")
x.append(text2['input_ids'])

text3 = tokenizer('<|startoftext|>' + 'Question. C' + '<|endoftext|>', truncation=True, max_length=20, padding="max_length")
x.append(text3['input_ids'])

print(x)

and this is what I got:
[[50257, 24361, 13, 317, 50256, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258], [50257, 24361, 13, 347, 50256, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258], [50257, 24361, 13, 327, 50256, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258, 50258]]

text1, text2, and text3 always have the same length. Is there any way to output the token of 'A', 'B', 'C'? Is it the one right before 50256, which I guess is the eos_token?
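For what it's worth, here is a minimal sketch of how one could check this, reusing the tokenizer and text1 from above. It assumes each sequence still contains the eos token, which holds here since nothing is truncated at max_length=20:

# Map each id back to its token string to see what sits at each position
ids = text1['input_ids']
print(tokenizer.convert_ids_to_tokens(ids))

# If the guess is right, the answer token is the one immediately before eos (50256)
eos_pos = ids.index(tokenizer.eos_token_id)
answer_id = ids[eos_pos - 1]
print(answer_id, tokenizer.decode([answer_id]))  # 317 -> ' A'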