Is this the correct method to get probabilities?

I want to compute the probability of a specific token t given a prompt p. Is this the correct way to calculate it? I'm extracting the logit for t from the logit vector and then applying softmax over all logits. Am I missing anything?

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

prompt = "p"
target_token = "t"

encoded_prompt = tokenizer.encode(prompt, return_tensors='pt')
with torch.no_grad():
    output = model(encoded_prompt)
logits = output.logits[:, -1, :]  # logits for the next token after the prompt

target_token_id = tokenizer.encode(target_token, add_special_tokens=False)  # may be more than one ID if t splits into several BPE pieces
target_logit = logits[0, target_token_id[0]]  # raw (unnormalized) logit for the target

probabilities = torch.softmax(logits, dim=-1)  # normalize over the full vocabulary
target_probability = probabilities[0, target_token_id[0]]
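To double-check my understanding of the softmax step itself (independent of the model), I tried it on a toy logit vector — the values here are made up, just standing in for one row of `output.logits[:, -1, :]`:

```python
import torch

# toy logit vector standing in for one row of output.logits[:, -1, :]
logits = torch.tensor([2.0, 1.0, 0.1])

probabilities = torch.softmax(logits, dim=-1)

# a single logit in isolation has no probability interpretation; the
# whole vector is needed, since each entry is normalized by the sum
manual = torch.exp(logits) / torch.exp(logits).sum()

print(torch.allclose(probabilities, manual))  # True
print(probabilities.sum().item())             # ~1.0
```

So my assumption is that extracting the logit for t first is only useful for inspection — the actual probability has to come from softmax applied to the whole logit vector, then indexing. Is that right?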