Transformers CausalLM loss is always NaN

Hi transformers users, I want to fine-tune LLaVA, a vision-language model, using PyTorch. However, the loss is always NaN, which prevents me from going any further.

Below is a toy example of my code.

import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, cache_dir="model", device_map="auto"
)

# the processor does not need torch_dtype or device_map
processor = AutoProcessor.from_pretrained(model_id, cache_dir="model")

prompt = "USER: <image>\nWhat are these?\nASSISTANT: These are two cats lying on a pink couch"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
# move the inputs to the model device and cast floating-point tensors to bfloat16
inputs = processor(text=prompt, images=raw_image, return_tensors="pt").to(model.device, torch.bfloat16)

# labels are required for a loss to be computed; here they are just a copy of input_ids
output = model(**inputs, labels=inputs["input_ids"])
print(output.loss)  # tensor(nan, grad_fn=<ToCopyBackward0>)
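
In case it helps narrow things down, here is a small diagnostic sketch I can run with the same inputs as above, to see whether NaN already shows up in the inputs or the logits before the loss is computed (just my own guess at a useful check, not a fix):

# Diagnostic: check whether NaN appears in the pixel values or logits, or only in the loss.
with torch.no_grad():
    out = model(**inputs, labels=inputs["input_ids"])

print("pixel_values contain NaN:", torch.isnan(inputs["pixel_values"]).any().item())
print("logits contain NaN:", torch.isnan(out.logits).any().item())
print("loss:", out.loss)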

Can anyone help me solve this issue?

Many thanks!