How do I fix the "RuntimeError: CUDA error: CUDA driver version is insufficient for CUDA runtime version. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect." error?

@AdamOswald1 the issue is that during inference a `torch.Generator('cuda')` is created while the Space is running on CPU hardware (so CUDA is not available).

Similar to what is done at L94 of app.py, you should test for CUDA availability and, if it is not available, create the generator with a plain `torch.Generator()` instead.

So as a solution you should either upgrade this Space to GPU hardware or replace L142 of app.py with:

```python
if torch.cuda.is_available():
    # Run the generator on the GPU when one is present
    generator = torch.Generator('cuda').manual_seed(seed)
else:
    # Fall back to a CPU generator so the Space also works on CPU hardware
    generator = torch.Generator().manual_seed(seed)
```
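
For completeness, here is a minimal sketch of how the device-aware generator could be wired into the rest of the inference code. This assumes the Space uses a diffusers `StableDiffusionPipeline` and the `runwayml/stable-diffusion-v1-5` checkpoint; the actual pipeline class, model id, and variable names in app.py may differ:

```python
import torch
from diffusers import StableDiffusionPipeline  # assumed pipeline; app.py may use another one

# Pick the device once and reuse it for both the pipeline and the generator.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical model id, used only for illustration.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

seed = 42  # example seed value
generator = torch.Generator(device).manual_seed(seed)

# Passing the generator to the pipeline call keeps results reproducible on either hardware.
image = pipe("an astronaut riding a horse", generator=generator).images[0]
```

Creating the generator on the same device the pipeline runs on avoids the mismatch that triggers the error above.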