Confused with setting up torch_dtype while using CPU as device

hey, i’m facing a similar issue with the ‘cpu’ device in app.py · nightfury/SD-InPainting at main,
since no GPU (CUDA) is available.

if i set torch_dtype=torch.float16,
then it throws
RuntimeError: expected scalar type Float but found BFloat16

if i set torch_dtype=torch.bfloat16,
then it throws
RuntimeError: expected scalar type BFloat16 but found Float

if i set torch_dtype=torch.half,
then it throws
RuntimeError: “LayerNormKernelImpl” not implemented for ‘Half’

if i set torch_dtype=torch.double,
then it throws
RuntimeError: expected scalar type BFloat16 but found Double

if i set torch_dtype=torch.long,
then it throws
raise TypeError('nn.Module.to only accepts floating point or complex ’
TypeError: nn.Module.to only accepts floating point or complex dtypes, but got desired dtype=torch.int64

so i’m really confused about which torch_dtype to use for a successful run on CPU.
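fwiw, a minimal sketch (plain PyTorch, not the exact pipeline in app.py — names here are illustrative) showing why the default torch.float32 is usually the safe choice on CPU, while half precision can hit missing CPU kernels:

```python
import torch

# Default dtype on CPU is float32; this path runs cleanly.
layer = torch.nn.LayerNorm(4)   # module parameters are float32 by default
x = torch.randn(2, 4)           # float32 input
out = layer(x)
print(out.dtype)                # torch.float32

# Casting to half precision may fail on CPU, because some ops
# (e.g. LayerNorm on older PyTorch builds) have no Half CPU kernel.
try:
    layer.half()(x.half())
    print("half LayerNorm ran on this CPU build")
except RuntimeError as e:
    print("RuntimeError:", e)
```

so if the model weights themselves are float32 (or bfloat16, as some of the errors above suggest), passing a different torch_dtype just creates a mismatch between the input dtype and the parameter dtype, which is what the "expected scalar type X but found Y" errors are reporting.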