When I use GPT2LMHeadModel, why does it have _keys_to_ignore_on_load_missing in the PyTorch version, while the TensorFlow 2 version doesn't?

When I use GPT2LMHeadModel, why does it have a _keys_to_ignore_on_load_missing attribute in the PyTorch version, while it doesn't in the TensorFlow 2 version? Does that mean TFGPT2LMHeadModel and GPT2LMHeadModel are different?
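
For reference, this is a minimal sketch of the check I'm describing (assuming both torch and tensorflow are installed so that each class can be imported):

```python
from transformers import GPT2LMHeadModel, TFGPT2LMHeadModel

# PyTorch class: this attribute lists parameter name patterns that are allowed
# to be missing from a checkpoint without triggering a "missing keys" warning.
print(getattr(GPT2LMHeadModel, "_keys_to_ignore_on_load_missing", None))

# TF2 class: checking whether an equivalent attribute exists here as well.
print(getattr(TFGPT2LMHeadModel, "_keys_to_ignore_on_load_missing", None))
```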