How do I add a new token and assign corresponding weights for all layers of a BERT model?

I understand that it is possible to add a new token and assign values to its embedding via embeddings.word_embeddings.weight:

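A minimal sketch of that approach, assuming the transformers and torch APIs; the token name `[NEW_TOKEN]` and the random vector are placeholders:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Hypothetical new token; resize grows the look-up matrix by one row
tokenizer.add_tokens(["[NEW_TOKEN]"])
model.resize_token_embeddings(len(tokenizer))

# Assign a (here: random) vector to the new token's row
new_id = tokenizer.convert_tokens_to_ids("[NEW_TOKEN]")
with torch.no_grad():
    model.embeddings.word_embeddings.weight[new_id] = torch.randn(model.config.hidden_size)
```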
That would correspond to updating the token's row in the embedding layer's look-up matrix.
Is it possible to assign new weights to that token for the other layers in a similar fashion?
We can access the deeper layers' outputs by passing output_hidden_states=True:

model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
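For reference, a small sketch of what that option returns (the input text is just an example); each entry of hidden_states is an activation computed from the input, one tensor per layer plus the embedding output:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Tuple of tensors, each shaped (batch, sequence_length, hidden_size):
# the embedding output followed by one tensor per encoder layer
print(len(outputs.hidden_states))  # 13 for bert-base (embeddings + 12 layers)
```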

But these weights are not stored in a state_dict the way the embedding layer's weights are stored in the look-up matrix embeddings.word_embeddings.weight.
I am not sure what I am missing here.
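To illustrate the distinction in plain PyTorch (toy sizes, no BERT involved): a look-up matrix is a parameter and therefore appears in a state_dict, whereas a deeper layer's output is an activation recomputed on every forward pass, so there is no stored per-token row to assign to.

```python
import torch

# Toy look-up matrix standing in for embeddings.word_embeddings
# (hypothetical vocab size 10, hidden size 4)
emb = torch.nn.Embedding(10, 4)
assert "weight" in emb.state_dict()  # parameters live in the state_dict

# Assigning a new row for a (hypothetical) token id 3
new_vec = torch.randn(4)
with torch.no_grad():
    emb.weight[3] = new_vec

# A deeper layer's output depends on the input and is recomputed
# each forward pass; only the layer's own weights are in its state_dict
layer = torch.nn.Linear(4, 4)
hidden = layer(emb(torch.tensor([3])))  # activation, not a stored parameter
```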
