BERT objective - does it include corrupted tokens? (not masked)

Hi,

I have a question about BERT's masked language modeling objective. I understand that tokens replaced with [MASK] are included in the objective. However, some of the selected tokens are instead corrupted (replaced with a random token) or left unchanged. Shouldn't these also be in the objective? If you replace a token id with the [MASK] token id in the input, that position effectively gets a cross entropy loss term. But how do you specify the corrupted or unchanged tokens? Do you need to add them to the objective manually somehow?
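To make my question concrete, here is a minimal sketch of the masking step as I currently understand it (the token ids are made up for illustration; 103 is [MASK] in bert-base-uncased, and -100 is PyTorch's default `ignore_index` for cross entropy):

```python
import torch

MASK_ID = 103   # [MASK] in bert-base-uncased's vocabulary
IGNORE = -100   # default ignore_index for torch.nn.CrossEntropyLoss

# A toy sequence of token ids (values are arbitrary for illustration)
input_ids = torch.tensor([101, 2023, 2003, 1037, 7953, 102])
labels = input_ids.clone()

# Suppose position 3 is selected for masking and replaced with [MASK]
selected = torch.zeros_like(input_ids, dtype=torch.bool)
selected[3] = True
input_ids[selected] = MASK_ID

# Every non-selected position is excluded from the loss
labels[~selected] = IGNORE
print(labels)  # tensor([-100, -100, -100, 1037, -100, -100])
```

With this setup, only the [MASK] position carries a label. For the corrupted or unchanged tokens, the input id differs from (or equals) the label id, so I don't see where their loss term comes from unless the labels are kept at those positions too.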