[unused] tokens when predicting with an MLM model

I did domain adaptation on my dataset by fine-tuning a BERT model (DistilBERT) with the masked language modeling (MLM) objective.
When I test the model on a sentence containing a [MASK] token, the top_k predictions it returns are tokens like [unused177]. Where does this problem come from? Is it related to the tokenizer?
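
For context, the MLM fine-tuning was set up with the Trainer API roughly as in the sketch below (simplified; tokenized_dataset stands in for my actual tokenized domain corpus, and the training arguments are placeholders):

from transformers import (
    DistilBertForMaskedLM,
    DistilBertTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
mlm_model = DistilBertForMaskedLM.from_pretrained("distilbert-base-uncased")

# randomly masks 15% of the tokens in each batch for the MLM objective
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="domain-adapted-mlm"),
    train_dataset=tokenized_dataset,  # placeholder: my tokenized domain corpus
    data_collator=data_collator,
)
trainer.train()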

import torch
from torch.nn import functional as F
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = trainer.model.base_model  # model fine-tuned with the Trainer

text = "I am going to get " + tokenizer.mask_token + " vaccine"
input1 = tokenizer.encode_plus(text, return_tensors="pt")

# position of the [MASK] token in the input ids
mask_index = torch.where(input1["input_ids"][0] == tokenizer.mask_token_id)

logits = model(**input1)
logits = logits.last_hidden_state

softmax = F.softmax(logits, dim=-1)
print(softmax.shape)
print(mask_index)

# take the scores at the masked position and decode the argmax
mask_word = softmax[0, mask_index, :]
top_word = torch.argmax(mask_word, dim=1)
print(tokenizer.decode(top_word))