After getting the model output, how do I get meaningful information?

I am trying to get meaningful text output from the model without using the pipeline for inference.

The “Behind the pipeline” tutorial gives a classification example.

Link - Tutorial

But I am trying to use a mask-filling model.

Code:

```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "Replace me by [MASK] text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(input_ids=encoded_input["input_ids"],
               attention_mask=encoded_input["attention_mask"])
```
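One thing I did manage is locating the [MASK] position in the encoded input. A minimal sketch of my own (it assumes the standard `tokenizer.mask_token_id` attribute):

```python
# Boolean match against the mask token id, then take the column indices.
mask_index = (encoded_input["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(mask_index)  # tensor holding the position of [MASK] in the sequence
```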

After that, how can I get meaningful details out of the output?

The output contains the following:

```
MaskedLMOutput(loss=None, logits=tensor([[[ -6.9080,  -6.8627,  -6.8496, ...,  -6.1739,  -6.0125,  -4.3861],
         [-12.3396, -12.1704, -12.1169, ..., -11.8522, -10.8242, -12.1897],
         [-12.1175, -12.0443, -12.0125, ..., -10.6335, -11.2715, -10.5266],
         ...,
         [ -9.8642,  -9.8000,  -9.8748, ...,  -9.4508, -10.7953,  -8.7879],
         [-11.9032, -11.5505, -11.9132, ...,  -9.1630, -11.2406,  -6.3046],
         [-12.4441, -13.5939, -12.4144, ..., -12.4859, -11.4275,  -9.7487]]],
       grad_fn=<...>), hidden_states=None, attentions=None)
```
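As I understand it, `logits` has shape `(batch_size, sequence_length, vocab_size)`, i.e. one row of vocabulary scores per input token. A quick check I added myself:

```python
# One score per vocabulary entry (30522 for bert-base-uncased) at each input position.
print(output.logits.shape)
```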

Following the tutorial, I apply a softmax over the vocabulary dimension to turn the logits into probabilities:

```python
import torch

predictions = torch.nn.functional.softmax(output.logits, dim=-1)
print(predictions)
```

Output:

```
tensor([[[3.9761e-08, 4.2633e-08, 4.8522e-08, ..., 1.3461e-07, 9.0488e-08, 1.2941e-06],
         [1.0643e-08, 1.1875e-08, 1.1567e-08, ..., 2.5163e-08, 3.4508e-08, 1.4164e-07],
         [7.9932e-13, 7.9141e-13, 9.5979e-13, ..., 3.7303e-12, 1.3634e-12, 1.2097e-11],
         ...,
         [1.0588e-12, 1.3337e-12, 9.7095e-13, ..., 6.9976e-13, 3.8466e-13, 2.0040e-11],
         [3.9985e-16, 4.6984e-16, 3.5347e-16, ..., 3.6423e-15, 4.9108e-16, 8.5845e-14],
         [2.6691e-11, 9.6144e-12, 2.3595e-11, ..., 5.1817e-11, 3.7737e-11, 5.8767e-10]]],
       grad_fn=<...>)
```

This does not give a classification-style output like the tutorial. How do I get meaningful text out of it?

In the tutorial, `model.config.id2label` gives `{0: 'LABEL_0', 1: 'LABEL_1'}`, but for a masked-LM model that mapping does not tell me anything useful.
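What I suspect is missing is mapping the scores at the [MASK] position back to vocabulary tokens instead of using `id2label`. A minimal sketch of what I have in mind (my own guess, using the standard `torch.topk` and `tokenizer.decode` APIs, not something taken from the tutorial):

```python
import torch

# Probabilities at the [MASK] position only, shape (vocab_size,).
mask_index = (encoded_input["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
mask_probs = predictions[0, mask_index].squeeze(0)

# Single best filler for the masked word.
top_id = mask_probs.argmax().item()
print(tokenizer.decode([top_id]))

# Or the five most likely fillers with their probabilities.
top_probs, top_ids = torch.topk(mask_probs, k=5)
for prob, token_id in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([token_id])}: {prob:.4f}")
```

Is this roughly what the fill-mask pipeline does internally?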