How are the lm_head weights tied to the embeddings in GPT2LMHeadModel?

I read the source code of GPT2LMHeadModel and found that the lm_head layer seemingly has nothing to do with the embedding layer. (The Keras model, by contrast, ties the weights explicitly by reusing the shared embeddings.)
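For context, my mental model of "tied" weights is plain parameter sharing: the output projection and the embedding hold the same underlying array, so updating one updates the other. A minimal numpy sketch of that idea (all class and variable names here are hypothetical, not from the transformers source):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 5, 3

class Embedding:
    """Maps token ids to vectors via a (vocab, dim) weight matrix."""
    def __init__(self, weight):
        self.weight = weight
    def __call__(self, ids):
        return self.weight[ids]

class LMHead:
    """Projects hidden states back to vocab logits with the SAME matrix."""
    def __init__(self, weight):
        self.weight = weight  # shares the array object, not a copy
    def __call__(self, hidden):
        return hidden @ self.weight.T  # (..., vocab) logits

wte = Embedding(rng.normal(size=(vocab, dim)))
lm_head = LMHead(wte.weight)  # "tying": hand over the same object

print(lm_head.weight is wte.weight)       # True: one shared parameter
wte.weight[0] += 1.0                      # in-place update to the embedding...
print(np.allclose(lm_head.weight[0], wte.weight[0]))  # ...is seen by lm_head
```

What I can't find in GPT2LMHeadModel is where this kind of sharing happens between `lm_head` and `transformer.wte`.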

So I'm wondering: how are the weights actually tied, as the documentation states?