Confused about setting torch_dtype when using CPU as the device

hey, i'm facing a similar issue for the 'cpu' device in app.py · nightfury/SD-InPainting at main,
as no GPU ('cuda') is available.

if i set torch_dtype=torch.float16,
then it throws
RuntimeError: expected scalar type Float but found BFloat16

if i set torch_dtype=torch.bfloat16,
then it throws
RuntimeError: expected scalar type BFloat16 but found Float

if i set torch_dtype=torch.half,
then it throws
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

if i set torch_dtype=torch.double,
then it throws
RuntimeError: expected scalar type BFloat16 but found Double

if i set torch_dtype=torch.long,
then it throws
TypeError: nn.Module.to only accepts floating point or complex dtypes, but got desired dtype=torch.int64

so i'm really confused about which torch_dtype to use for a successful run on CPU.
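for context, here's a minimal sketch (my own stand-in using nn.Linear, not the actual pipeline code) of how the "expected scalar type ... but found ..." errors arise: torch_dtype casts only the model's weights, while the input tensors stay at the CPU default of float32, so the two dtypes disagree at the first matmul:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for one layer of the pipeline. Casting only the
# module's weights (which is what torch_dtype does) while the input stays
# float32 reproduces the dtype-mismatch errors quoted above.
layer = nn.Linear(8, 8)          # parameters default to torch.float32
x = torch.randn(2, 8)            # float32 input, the CPU default

out = layer(x)                   # float32 weights + float32 input: runs fine
print(out.dtype)                 # torch.float32

layer_bf16 = nn.Linear(8, 8).to(torch.bfloat16)
try:
    layer_bf16(x)                # bfloat16 weights + float32 input
except RuntimeError as err:
    print("RuntimeError:", err)  # dtype mismatch, as in the errors above
```

as far as i can tell, the usual workaround on CPU is to just leave torch_dtype at its default (or pass torch_dtype=torch.float32 explicitly), since the half-precision dtypes are mainly supported on GPU, though i'm not sure that's the intended fix for this app.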
