Why is invert_attention_mask multiplied by -1e4 or -1e9?


In the ModuleUtilsMixin class, inside the invert_attention_mask function, after dimensions are added to encoder_extended_attention_mask, the mask is inverted with (1 - mask) and the result is then multiplied by either -1e4 or -1e9, depending on the dtype. If all we need is to swap the 1s and 0s, why multiply by -1e4 or -1e9 instead of just returning (1 - mask)?
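
For context, here is a simplified sketch of the logic I am asking about (paraphrased, not the exact library code; the real function also handles 3D masks and other details):

```python
import torch

def invert_attention_mask_sketch(encoder_attention_mask, dtype=torch.float32):
    # Add broadcast dimensions: (batch, seq_len) -> (batch, 1, 1, seq_len)
    mask = encoder_attention_mask[:, None, None, :].to(dtype)
    # (1 - mask) swaps the 1s and 0s, and the result is then scaled
    # by a large negative constant chosen per dtype.
    if dtype == torch.float16:
        return (1.0 - mask) * -1e4
    return (1.0 - mask) * -1e9

# Example: 1 = attend to this token, 0 = padding
mask = torch.tensor([[1, 1, 1, 0]])
print(invert_attention_mask_sketch(mask))
# positions to attend to become 0, padding becomes -1e9
```

As I understand it, the returned tensor is later added to the raw attention scores before the softmax, which is why I'd like to understand the choice of those constants.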
Link to the full code:
https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py
Thank you.