Unmasker probabilities for all tokens in a sequence

Hello, I fine-tuned a masked language model using AutoModelForMaskedLM with the default DataCollatorForLanguageModeling. I would like to infer the probability of every vocabulary option (my vocabulary has fewer than 50 tokens) at each position of my test sequence, without iterating over each position, masking it, and calling the fill-mask pipeline to get the vocabulary probabilities one position at a time.
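Concretely, the per-position loop I'm trying to avoid looks roughly like the sketch below (the checkpoint name is a placeholder for my fine-tuned model, and I'm calling the model directly rather than through the pipeline to keep the example short):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # placeholder for my fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

input_ids = tokenizer("the cat sat on the mat", return_tensors="pt")["input_ids"][0]

# One forward pass per position: mask position i, read off the
# vocabulary distribution predicted at position i.
all_probs = []
for i in range(input_ids.shape[0]):
    masked = input_ids.clone()
    masked[i] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=masked.unsqueeze(0)).logits
    all_probs.append(torch.softmax(logits[0, i], dim=-1))

probs = torch.stack(all_probs)  # (seq_len, vocab_size)
```

This works, but it costs one forward pass per token in the sequence.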

Is there a way to obtain all of these probabilities in one go, rather than looping over every token in the sequence?
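For context, one idea I've been considering is to build a batch in which row i is the original sequence with position i masked, and run a single batched forward pass; a sketch is below (checkpoint name again a placeholder). I'd like to know whether this, or something simpler, is the recommended approach:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # placeholder for my fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

input_ids = tokenizer("the cat sat on the mat", return_tensors="pt")["input_ids"][0]
seq_len = input_ids.shape[0]

# Row i of the batch is the original sequence with position i masked.
batch = input_ids.unsqueeze(0).repeat(seq_len, 1)
idx = torch.arange(seq_len)
batch[idx, idx] = tokenizer.mask_token_id

with torch.no_grad():
    logits = model(input_ids=batch).logits  # (seq_len, seq_len, vocab_size)

# Keep only the prediction made at each row's masked position.
probs = torch.softmax(logits[idx, idx], dim=-1)  # (seq_len, vocab_size)
```

This gives the same per-position distributions as the loop, in a single forward pass, at the cost of a batch of size seq_len.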