Is it possible to get logits when using GPT-J in float16 precision?

I am unable to figure out how to get the logits from GPT-J when it is loaded in float-16 precision. When loading in float-32 precision with model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") it worked fine, but the full float-32 weights are too large to fit on my GPU and cause an out-of-memory error. Can someone suggest what to do? This is the link to the doc.
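
In the float-32 case I read the logits off a plain forward pass, roughly like this (a minimal sketch; outputs.logits is the standard field on a causal-LM output, and model / input_ids are the same objects as in the code below):

# Float-32 case: a plain forward pass returns the logits directly
outputs = model(input_ids)
logits = outputs.logits  # shape: (batch_size, sequence_length, vocab_size)

Below is the code that loads the model in float-16 precision: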

from transformers import GPTJForCausalLM, AutoTokenizer
import torch

# Load the weights in half precision so the model fits in GPU memory
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
    "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
    "researchers was the fact that the unicorns spoke perfect English."
)

# Move the inputs to the same device as the half-precision model
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

gen_tokens = model.generate(
    input_ids,
    do_sample=True,      # sample rather than greedy-decode
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
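
To clarify what I mean by "getting the logits": with generate I would expect to be able to pull the per-step scores like this (a sketch only; return_dict_in_generate and output_scores are standard generate flags in transformers, and the variable names are mine):

# Sketch: generate can expose per-step scores (processed logits)
gen_out = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
    return_dict_in_generate=True,
    output_scores=True,
)
# gen_out.scores is a tuple with one (batch_size, vocab_size) tensor
# per generated token; gen_out.sequences holds the generated token ids
step_logits = gen_out.scores[0]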