"Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64" error

I’m trying to run the example from the TRL package on a Mac M1:

# imports
import torch
from transformers import AutoTokenizer
from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead, create_reference_model
from trl.core import respond_to_batch

# get models
model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2')
model_ref = create_reference_model(model)

tokenizer = AutoTokenizer.from_pretrained('gpt2')

# initialize trainer
ppo_config = PPOConfig(
    batch_size=1,
)

# encode a query
query_txt = "This morning I went to the "
query_tensor = tokenizer.encode(query_txt, return_tensors="pt")

# get model response
response_tensor = respond_to_batch(model, query_tensor)

# create a ppo trainer
ppo_trainer = PPOTrainer(ppo_config, model, model_ref, tokenizer)

# define a reward for response
# (this could be any reward such as human feedback or output from another model)
reward = [torch.tensor(1.0)]

# train model for one step with ppo
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)

I am running into a lot of issues getting this code to run with the Mac MPS backend. I have tried setting device = 'cpu' instead and adding .to(device) (or passing device=device) to the model, tokenizer, query tensor, response tensor, and reward, roughly as sketched below.
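Here is approximately the modified script. It is a sketch of my attempt rather than the exact code I ran, so the device may not be passed in exactly the right places (the tokenizer itself has no .to(), so I left it alone here):

# imports
import torch
from transformers import AutoTokenizer
from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead, create_reference_model
from trl.core import respond_to_batch

# force everything onto the CPU instead of letting it end up on MPS
device = torch.device("cpu")

# get models and move the policy model to the CPU
model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2').to(device)
model_ref = create_reference_model(model)

tokenizer = AutoTokenizer.from_pretrained('gpt2')

# initialize trainer
ppo_config = PPOConfig(
    batch_size=1,
)

# encode a query and move it to the CPU
query_txt = "This morning I went to the "
query_tensor = tokenizer.encode(query_txt, return_tensors="pt").to(device)

# get model response (token ids generated by the CPU model)
response_tensor = respond_to_batch(model, query_tensor)

# create a ppo trainer
ppo_trainer = PPOTrainer(ppo_config, model, model_ref, tokenizer)

# define the reward explicitly as float32 on the CPU
reward = [torch.tensor(1.0, dtype=torch.float32, device=device)]

# train model for one step with ppo
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)

Even with everything forced onto the CPU like this, I still get: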

Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
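
As far as I can tell, none of the tensors I pass into step() is float64 in the first place; checking their dtypes right before the call gives:

print(query_tensor.dtype)     # torch.int64 -- token ids from the tokenizer
print(response_tensor.dtype)  # torch.int64 -- token ids from respond_to_batch
print(reward[0].dtype)        # torch.float32 -- default dtype of torch.tensor(1.0)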

If I’m not using an MPS tensor anywhere, why is this error being shown?