I have a simple MaskedLM model with one masked token at position 7. The model returns 20.2516 and 18.0698 as the loss and the score of the predicted token, respectively. However, I'm not sure how the loss is computed from the score. I assumed the loss should be
loss = -log(softmax(score)[prediction])
but computing this gives 0.0002. I'm confused about how the loss is computed in the model.
import copy
from transformers import BertForMaskedLM, BertTokenizerFast
import torch
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
text = "Who was Jim Paterson ? Jim Paterson is a doctor".lower()
inputs = tokenizer.encode_plus(text, return_tensors="pt", add_special_tokens=True, truncation=True,
                               pad_to_max_length=True, return_attention_mask=True, max_length=64)
input_ids = inputs['input_ids']
masked = copy.deepcopy(inputs['input_ids'])
masked[0][7] = 103
for t in range(len(masked[0])):
    if masked[0][t] != 103:
        masked[0][t] = -100
loss, scores = model(input_ids=input_ids, attention_mask=inputs['attention_mask'],
                     token_type_ids=inputs['token_type_ids'], labels=masked)
print('loss', loss)
print(scores.shape)
pred = torch.argmax(scores[0][7]).item()
print("predicted token:", pred, tokenizer.convert_ids_to_tokens([pred]))
print("score:", scores[0][7][pred])
logSoftmax = torch.nn.LogSoftmax(dim=1)
NLLLos = torch.nn.NLLLoss()
output = NLLLos(logSoftmax(torch.unsqueeze(scores[0][7], 0)), torch.tensor([pred]))
print(output)
You need to mask tokens in the input_ids, not in the labels. To prepare the labels for masked LM, set every position to -100 (the ignore index) except the masked positions.
The masked LM loss is then calculated simply as the cross-entropy loss between the logits and the labels.
So the correct usage would be:
text = "Who was Jim Paterson ? Jim Paterson is a doctor".lower()
inputs = tokenizer([text], return_tensors="pt")
input_ids = inputs["input_ids"]
# mask the token
input_ids[0][7] = tokenizer.mask_token_id
labels = inputs["input_ids"].clone()
labels[labels != tokenizer.mask_token_id] = -100 # only calculate loss on masked tokens
loss, logits = model(
    input_ids=input_ids,
    labels=labels,
    attention_mask=inputs["attention_mask"],
    token_type_ids=inputs["token_type_ids"]
)
# loss => 18.2054
# calculate loss manually
import torch.nn.functional as F
loss2 = F.cross_entropy(logits.view(-1, tokenizer.vocab_size), labels.view(-1))
# loss2 => 18.2054
Thanks a lot @valhalla for your reply. You're right, I didn't mask the tokens in input_ids, which was a mistake.
I also found a small mistake in your code: I think the labels should be -100 everywhere except at the tokens that are masked in input_ids. For those positions, the label should contain the correct token id (and not the mask id, 103), so the model knows what the actual token is. In your code the model predicts Paterson as the correct answer, but based on the labels it thinks the correct token is actually the mask token ([MASK]).
I made a small change to the code and now it works. The loss is now 0.0056, which makes sense for a correct prediction.
import copy
from transformers import BertForMaskedLM, BertTokenizerFast
import torch
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
text = "Who was Jim Paterson ? Jim Paterson is a doctor".lower()
inputs = tokenizer.encode_plus(text, return_tensors="pt", add_special_tokens=True, truncation=True,
                               pad_to_max_length=True, return_attention_mask=True, max_length=64)
input_ids = inputs['input_ids']
labels = copy.deepcopy(input_ids) #this is the part I changed
input_ids[0][7] = tokenizer.mask_token_id
labels[input_ids != tokenizer.mask_token_id] = -100
loss, scores = model(input_ids=input_ids, attention_mask=inputs['attention_mask'],
                     token_type_ids=inputs['token_type_ids'], labels=labels)
print('loss', loss)
pred = torch.argmax(scores[0][7]).item()
print("predicted token:", pred, tokenizer.convert_ids_to_tokens([pred]))
print(NLLLos(logSoftmax(torch.unsqueeze(scores[0][7], 0)), torch.tensor([pred])))  # same as F.cross_entropy(scores.view(-1, tokenizer.vocab_size), labels.view(-1)) here, because pred equals the true token id
I am a complete beginner with NNs and I am currently facing the same problem.
I am trying to implement my own loss function for BERT Masked LM.
So this part of the code is the most useful for my case:
However, I do not understand how I can calculate the cross-entropy loss from the logits and the masked token ID. How do we get the information about which word was originally masked? That information seems to be completely overlooked when calling the cross-entropy function. Am I missing something?
BERT actually predicts all of the tokens (masked and non-masked). This is why we set the labels of the non-masked tokens to -100, which means no loss is computed for those positions: the cross-entropy function ignores targets equal to -100, see here.
Also, you can look at this code for pre-training the BERT model to understand how the masking works.
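To make that concrete, here is a minimal toy sketch (my own example, not the Hugging Face internals): PyTorch's cross-entropy uses ignore_index=-100 by default, so positions labeled -100 contribute nothing, and only the masked position is scored against its original token id (the id 6623 below is purely hypothetical).
import torch
import torch.nn.functional as F

vocab_size = 30522                                 # bert-base-uncased vocabulary size
seq_len = 12                                       # toy sequence length
logits = torch.randn(1, seq_len, vocab_size)       # pretend these are the MLM logits

labels = torch.full((1, seq_len), -100, dtype=torch.long)  # ignore every position by default
labels[0][7] = 6623                                # hypothetical id of the original (masked) word

# F.cross_entropy ignores targets equal to -100 (its default ignore_index),
# so only position 7 contributes to the loss
loss = F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1))
print(loss)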
I am also interested in this topic.
I have a question about how to generate the "[MASK]" token for the masked position, i.e. input_ids[0][7] = tokenizer.mask_token_id.
I was wondering if there is a function that masks 15% of the tokens (more precisely: replaces 80% of that 15% with "[MASK]", replaces 10% with random tokens, and leaves the remaining 10% unchanged)?
Or do I need to write this function myself?
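In case I do need to write it myself, here is a rough sketch of the 80/10/10 logic I have in mind (the function name and details below are just my own illustration, assuming a BERT-style tokenizer):
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    # Rough sketch of BERT-style masking: select ~15% of the (non-special) tokens,
    # then 80% of them -> [MASK], 10% -> random token, 10% -> left unchanged.
    labels = input_ids.clone()

    # select ~15% of positions, never the special tokens ([CLS], [SEP], padding)
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        tokenizer.get_special_tokens_mask(labels[0].tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    ).unsqueeze(0)
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # only compute loss on the selected positions

    # 80% of the selected tokens -> [MASK]
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[indices_replaced] = tokenizer.mask_token_id

    # 10% of the selected tokens -> a random token (half of the remaining 20%)
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    input_ids[indices_random] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[indices_random]

    # the remaining 10% stay unchanged in the input but are still predicted (their labels are kept)
    return input_ids, labels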
Thanks,
Ayala
Hi @valhalla, thanks for the explanation. I am also interested in this topic. Would you take a look at my question? Thanks in advance.
What @sanaz wanted to point out is that:
the ground-truth labels based on your code are
[-100, -100, …, 103, …, -100]
while the ground-truth labels based on sanaz's modified code are
[-100, -100, …, token_id(7th token), …, -100]
I know that the tokens labeled -100 will be ignored, but for the token to be predicted (in this case, the 7th token), do we still use 103 ([MASK]) as the ground-truth label?
As far as I understand, sanaz's version is the correct interpretation of MLM: the idea is to predict the masked token from the unmasked ones, so the token is replaced with [MASK] in the input, but in the labels the original token id (not 103) is kept at the position that was masked in the input_ids.
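A quick check that illustrates this, reusing the variables from sanaz's snippet above (just an illustrative addition on my part):
# at the masked position, the input holds [MASK] while the label holds the original token id
print(input_ids[0][7].item() == tokenizer.mask_token_id)          # True: the input is masked
print(tokenizer.convert_ids_to_tokens([labels[0][7].item()]))     # the original word at position 7 ('paterson' here)
print((labels[0] == -100).sum().item())                           # number of positions ignored by the loss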