Error occurred when executing CLIPTextEncode: CUDA error: operation not supported

Hey there! I’m trying out ComfyUI after using WebUI. When I ran it with a checkpoint I often used in WebUI (majicMIX realistic 麦橘写实 - v7 | Stable Diffusion Checkpoint | Civitai), I hit the error below. Any suggestions or clues would be greatly appreciated.

This machine is a Windows-based p3.2xlarge EC2 instance, which has a single NVIDIA V100 Tensor Core GPU. I have installed the NVIDIA driver so that ComfyUI can run on the GPU.
Thanks!
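For extra context, here is a quick sanity check I can run with the portable install's embedded Python (a sketch; the `torch` build is whatever ships with ComfyUI portable):

```python
import torch

# Does this PyTorch build see the GPU at all?
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    # Reproduce the failing step in isolation: move a tensor to the GPU.
    t = torch.randn(4, 4).to("cuda")
    print("tensor is on", t.device)
```

If `torch.cuda.is_available()` comes back False, the problem is the driver/torch pairing rather than ComfyUI itself.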

Error occurred when executing CLIPTextEncode:

CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\nodes.py", line 58, in encode
cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 141, in encode_from_tokens
self.load_model()
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 161, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 470, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 464, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 306, in model_load
raise e
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 302, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, patch_weights=load_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_patcher.py", line 281, in patch_model
self.model.to(device_to)
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1173, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
return t.to(
^^^^^
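The last frame is just torch's `Module.to` moving the CLIP text encoder's weights onto the GPU, so the failure should be reproducible outside ComfyUI. A minimal sketch (the `nn.Linear` is a stand-in for the real encoder, not ComfyUI's actual model):

```python
import torch
import torch.nn as nn

# Stand-in module: .to("cuda") walks parameters the same way
# model_patcher.patch_model does for the CLIP text encoder.
model = nn.Linear(16, 16)

if torch.cuda.is_available():
    model.to("cuda")  # the call that raises "operation not supported" in my trace
    print("moved OK:", next(model.parameters()).device)
else:
    print("CUDA not available to this PyTorch build")
```

If this tiny repro also fails with `CUDA error: operation not supported`, that would point at the driver or torch/CUDA build on this instance rather than the checkpoint.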