GPT-J weights on HuggingFace

Is the GPT-J model available on HuggingFace the full-weight model or the slim weights? Any idea?

You can use both. The default is the full version (float32). To load the slim (float16) version instead, do the following:

from transformers import GPTJForCausalLM
import torch

model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",          # the float16 branch of the repo (the slim weights)
    torch_dtype=torch.float16,   # keep the weights in half precision in memory
    low_cpu_mem_usage=True,      # avoid materializing a second full copy while loading
)
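For context, here is a back-of-the-envelope sketch of why the slim checkpoint matters. The ~6.05 billion parameter count is an approximation, not a figure from this thread:

```python
# Rough memory estimate for GPT-J, assuming roughly 6.05 billion
# parameters (the exact count differs slightly).
params = 6_050_000_000

fp32_gib = params * 4 / 1024**3  # full weights: 4 bytes per parameter
fp16_gib = params * 2 / 1024**3  # slim weights: 2 bytes per parameter

print(f"float32: ~{fp32_gib:.1f} GiB, float16: ~{fp16_gib:.1f} GiB")
```

So the slim float16 weights need roughly half the memory of the full float32 checkpoint (on the order of 11 GiB vs 23 GiB for the weights alone), which is often the difference between fitting on a single consumer GPU or not.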

Sure. Thanks for the help.