TFAutoModelForCausalLM vs TFGPTJForCausalLM

I am using a CausalLM model for next-token generation.
There are two APIs I can use for this: TFAutoModelForCausalLM and TFGPTJForCausalLM.

The model ID is EleutherAI/gpt-j-6B.
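
For reference, the two loading paths look roughly like this (a minimal sketch; the prompt text and the max_new_tokens value are illustrative, not taken from my actual code):

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM, TFGPTJForCausalLM

model_id = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Option 1: the Auto class reads the checkpoint's config.json, sees
# model_type == "gptj", and dispatches to the matching concrete class.
model_auto = TFAutoModelForCausalLM.from_pretrained(model_id)

# Option 2: the concrete class, hard-wired to the GPT-J architecture.
model_gptj = TFGPTJForCausalLM.from_pretrained(model_id)

# Either way the loaded object is a TFGPTJForCausalLM, so generation
# works identically:
inputs = tokenizer("The meaning of life is", return_tensors="tf")
outputs = model_auto.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```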

Both APIs generate tokens successfully when given the max_new_tokens parameter.
What are the differences between these two APIs?
Which one is better to use, and why?

Also, is TFBertForCausalLM available? I think it should not be, since BERT only has encoder stacks and cannot be used to generate next tokens. I expect even TFAutoModelForCausalLM will not work with a BERT model ID. Correct me if I am wrong here.
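
One way to check this for yourself, without downloading any weights, is to inspect the model-type-to-class mapping that TFAutoModelForCausalLM dispatches through (a sketch; the mapping lives in an internal module, so its exact location may change between transformers versions):

```python
from transformers.models.auto.modeling_tf_auto import (
    TF_MODEL_FOR_CAUSAL_LM_MAPPING_NAMES,
)

# GPT-J is registered, which is why the Auto class resolves
# EleutherAI/gpt-j-6B to TFGPTJForCausalLM.
print(TF_MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.get("gptj"))

# Look up "bert" the same way: a None result means no TF causal-LM
# head is registered for BERT; a class name means one exists.
print(TF_MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.get("bert"))
```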