Why are these outputs different?
from transformers import AutoModel, AutoTokenizer

sent = "This is a test sentence."  # example input; any sentence works
tokenizer = AutoTokenizer.from_pretrained('prajjwal1/bert-tiny')
model = AutoModel.from_pretrained('prajjwal1/bert-tiny', output_hidden_states=True)
print(model(**tokenizer(sent, return_tensors="pt"), output_hidden_states=True).hidden_states)
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('prajjwal1/bert-tiny')
model = AutoModelForMaskedLM.from_pretrained('prajjwal1/bert-tiny', output_hidden_states=True)
print(model(**tokenizer(sent, return_tensors="pt"), output_hidden_states=True).hidden_states)
Actually, the first two outputs turn out to be exactly the same. But what about this third snippet?
from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('prajjwal1/bert-tiny')
config = AutoConfig.from_pretrained('prajjwal1/bert-tiny', output_hidden_states=True)
# from_config builds the architecture only; weights are randomly initialized
# rather than loaded from the checkpoint
model = AutoModelForMaskedLM.from_config(config)
print(model(**tokenizer(sent, return_tensors="pt"), output_hidden_states=True).hidden_states)
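Rather than eyeballing the printed tensors, one can compare the hidden states of two runs pairwise. Below is a minimal sketch assuming PyTorch; `hidden_states_equal` is a hypothetical helper written for this question, not part of the transformers library:

```python
import torch

def hidden_states_equal(hs_a, hs_b, atol=1e-6):
    """Return True if two tuples of hidden-state tensors match elementwise.

    `hidden_states` from a transformers model is a tuple with one tensor
    per layer (plus the embedding output), so we compare layer by layer.
    """
    if len(hs_a) != len(hs_b):
        return False
    return all(torch.allclose(a, b, atol=atol) for a, b in zip(hs_a, hs_b))
```

With the outputs of the snippets above captured in variables, a call like `hidden_states_equal(out1.hidden_states, out2.hidden_states)` would confirm whether the first two runs agree and whether the `from_config` run diverges.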