How to navigate model parameters to get the weight & bias values?

Once I load a model (e.g. bert-base-uncased) with AutoModel, how can I access the internal weight and bias values? What does accessing the internal model parameters look like?
I can access the encoder parameters with model.encoder.named_parameters(). But how do I get the weight values of one of the layers?

Hello,
To look at the structure of an object, you can inspect its __dict__ attribute.

from transformers import AutoModel

model = AutoModel.from_pretrained('bert-base-uncased')
print(model.__dict__)
'''
 '_state_dict_pre_hooks': OrderedDict(),
 '_load_state_dict_pre_hooks': OrderedDict(),
 '_load_state_dict_post_hooks': OrderedDict(),
 '_modules': OrderedDict([('embeddings',
               BertEmbeddings(
                 (word_embeddings): Embedding(30522, 768, padding_idx=0)
                 (position_embeddings): Embedding(512, 768)
                 (token_type_embeddings): Embedding(2, 768)
                 (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                 (dropout): Dropout(p=0.1, inplace=False)
               )),
              ('encoder', (...)
'''

From this you can see that the weights are stored in the _modules attribute. _modules is an OrderedDict, and model._modules.keys() == odict_keys(['embeddings', 'encoder', 'pooler']). Each submodule has its own _modules dict, so you can list the keys one level at a time, as in the sketch below.
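
A minimal sketch of that drill-down, assuming the same bert-base-uncased model (the layer indices '0' through '11' are what bert-base exposes; other models will differ):

# Each submodule has its own _modules OrderedDict; list the keys level by level
print(model._modules.keys())
# odict_keys(['embeddings', 'encoder', 'pooler'])
print(model._modules['encoder']._modules.keys())
# odict_keys(['layer'])
print(model._modules['encoder']._modules['layer']._modules.keys())
# odict_keys(['0', '1', ..., '11'])  -- one entry per transformer layer
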
Repeating this exploration method all the way down leads to this horrible piece of code that you can use to get the weights of a layer:

layer0_attention_query_weight = (
    model
    ._modules['encoder']
    ._modules['layer']
    ._modules['0']
    ._modules['attention']
    ._modules['self']
    ._modules['query']
    ._parameters['weight']
    .detach()
    .numpy()
)
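
For what it's worth, you rarely need to go through _modules directly: submodules are also exposed as regular attributes, and parameters can be looked up by their dotted names. A minimal sketch, assuming the same bert-base-uncased model as above:

# Same weight via the public attribute path
query_weight = model.encoder.layer[0].attention.self.query.weight.detach().numpy()

# Or look it up by its dotted name in the state dict
query_weight = model.state_dict()['encoder.layer.0.attention.self.query.weight'].numpy()

# named_parameters() yields (name, tensor) pairs if you want to browse the names first
for name, param in model.named_parameters():
    if name.startswith('encoder.layer.0.attention'):
        print(name, tuple(param.shape))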

If that answers your question…