Untrained models produce inconsistent outputs

More of a general question, since I suspect not many people will want to work with untrained models.

It seems that when you create a model from a config (i.e. untrained), that model will produce different results for identical inputs. I’m wondering why. Are the weights randomly initialized on each forward pass?

Code:

import torch
from transformers import AutoModel, AutoTokenizer, AutoConfig

BaseName = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(BaseName)
input_ids = torch.tensor(tokenizer.encode('Hello there')).unsqueeze(0)

# Load trained model
model = AutoModel.from_pretrained(BaseName)

trained_tensor1 = model(input_ids)[0]
trained_tensor2 = model(input_ids)[0]

print('Trained tensors are the same: ', torch.eq(trained_tensor1, trained_tensor2).all())
# Prints True

# Load untrained model
config = AutoConfig.from_pretrained(BaseName)
model = AutoModel.from_config(config)

untrained_tensor1 = model(input_ids)[0]
untrained_tensor2 = model(input_ids)[0]

print('Untrained tensors are the same: ', torch.eq(untrained_tensor1, untrained_tensor2).all())
# Prints False

I’ve also tried this with XLNet and got the same result.

Have you tried putting the model in evaluation mode with model.eval()? The from_pretrained class method does that for you, but building the model from a config (the regular init) does not.
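
As a rough sketch, the fix looks something like this (same model and input as in your snippet, just with model.eval() added before the forward passes):

import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

BaseName = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(BaseName)
input_ids = torch.tensor(tokenizer.encode('Hello there')).unsqueeze(0)

config = AutoConfig.from_pretrained(BaseName)
model = AutoModel.from_config(config)
model.eval()  # switch off dropout (and any other train-only behaviour)

with torch.no_grad():
    untrained_tensor1 = model(input_ids)[0]
    untrained_tensor2 = model(input_ids)[0]

print('Untrained tensors are the same: ', torch.eq(untrained_tensor1, untrained_tensor2).all())
# Now prints True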


Putting the model in evaluation mode solved it. Thank you very much for your reply!

So I’m guessing the models have some dropout layers?

Yup, you can just print the model in an IDE to see its structure.
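
For example, something like this (just a quick sketch) would list the dropout modules in the untrained BERT model:

import torch.nn as nn
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained('bert-base-cased')
model = AutoModel.from_config(config)

print(model)  # the repr shows every submodule, including the Dropout layers

# Or collect them programmatically
dropouts = [name for name, module in model.named_modules() if isinstance(module, nn.Dropout)]
print(len(dropouts), 'dropout layers, e.g.', dropouts[:3])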
