I’m trying to run LTXVideo (for text-to-video or image-to-video) via ComfyUI, but when I run the default workflow with the default prompt, I get the following error:
CLIPTextEncode - Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
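For what it’s worth, this error means one operand of a matrix multiply is on the GPU while the other is still on the CPU. A minimal sketch of the same failure mode in plain PyTorch (tensor names are illustrative, not ComfyUI internals):

```python
import torch

# One tensor on the GPU (when available), one left on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(2, 3, device=device)
b = torch.randn(3, 2)  # stays on CPU

try:
    a @ b  # on a CUDA machine this raises the same "two devices" RuntimeError
except RuntimeError as e:
    print(e)

# Moving both operands to the same device fixes it:
result = a @ b.to(a.device)
print(result.shape)
```

So something in the workflow is presumably keeping the text encoder (or its weights) on the CPU while the rest runs on cuda:0.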
I could generate images fine with Stable Diffusion 3.5, which I tried earlier, but I’m blocked when trying to use LTX.
Any pointers on how to solve this? Am I supposed to debug and adapt the Python code from ComfyUI?
Thanks!