Gertie01/app-pzeyhe-14:Report

Exit code: 1. Reason: ModuleNotFoundError: No module named 'einops' (full traceback in the container logs below)

Container logs:

===== Application Startup at 2025-10-24 12:52:24 =====

/usr/local/lib/python3.10/site-packages/torch/amp/autocast_mode.py:266: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn(
Traceback (most recent call last):
  File "/app/app.py", line 2, in <module>
    import models
  File "/app/models.py", line 4, in <module>
    from ip_adapter import IPAdapter
  File "/usr/local/lib/python3.10/site-packages/ip_adapter/__init__.py", line 1, in <module>
    from .ip_adapter import IPAdapter, IPAdapterPlus, IPAdapterPlusXL, IPAdapterXL, IPAdapterFull
  File "/usr/local/lib/python3.10/site-packages/ip_adapter/ip_adapter.py", line 25, in <module>
    from .resampler import Resampler
  File "/usr/local/lib/python3.10/site-packages/ip_adapter/resampler.py", line 8, in <module>
    from einops import rearrange
ModuleNotFoundError: No module named 'einops'

@John6666


Here: App Pzeyhe 14 - a Hugging Face Space by Gertie01


Will you help?

@John6666


The error message itself is the biggest clue; if you just paste it into a search engine or a generative AI, you'll find a fix about half the time…
For now, add einops to requirements.txt like this:

gradio
torch
git+https://github.com/huggingface/diffusers
git+https://github.com/huggingface/transformers
accelerate
Pillow
safetensors
xformers
spaces
ip-adapter
einops
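Once einops is in requirements.txt and the Space rebuilds, the failing `from einops import rearrange` in resampler.py should resolve. As a general pattern, missing dependencies can be caught up front instead of surfacing as a mid-import traceback; a small sketch (the `missing_modules` helper and its name are mine, not part of the Space's code):

```python
import importlib.util

def missing_modules(names):
    """Return the names that cannot be imported in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# In the Space you would check the real dependency list at startup and
# fail fast with a clear message instead of a mid-import traceback:
#     absent = missing_modules(["einops", "torch", "gradio"])
#     if absent:
#         raise SystemExit(f"Add to requirements.txt: {absent}")
print(missing_modules(["json", "importlib"]))  # stdlib modules -> []
```

This turns a ModuleNotFoundError buried four imports deep into a single actionable message naming every missing package at once.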

Exit code: 1. Reason: OSError: stabilityai/sdxl-vae does not appear to have a file named diffusion_pytorch_model.fp16.bin. (full traceback in the container logs below)

Container logs:

===== Application Startup at 2025-11-03 20:44:13 =====

/usr/local/lib/python3.10/site-packages/torch/amp/autocast_mode.py:266: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn(
🚀 Starting model loading and compilation...
Loading SDXL base model: stabilityai/stable-diffusion-xl-base-1.0


model_index.json: 100%|██████████| 609/609 [00:00<00:00, 2.71MB/s]
scheduler_config.json: 100%|██████████| 479/479 [00:00<00:00, 2.34MB/s]
config.json: 100%|██████████| 565/565 [00:00<00:00, 3.82MB/s]
text_encoder/model.fp16.safetensors: 100%|██████████| 246M/246M [00:01<00:00, 245MB/s]
config.json: 100%|██████████| 575/575 [00:00<00:00, 3.28MB/s]
text_encoder_2/model.fp16.safetensors: 100%|██████████| 1.39G/1.39G [00:02<00:00, 637MB/s]
merges.txt: 100%|██████████| 525k/525k [00:00<00:00, 48.0MB/s]
special_tokens_map.json: 100%|██████████| 472/472 [00:00<00:00, 3.12MB/s]
tokenizer_config.json: 100%|██████████| 737/737 [00:00<00:00, 3.45MB/s]
vocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 24.3MB/s]
special_tokens_map.json: 100%|██████████| 460/460 [00:00<00:00, 2.20MB/s]
tokenizer_config.json: 100%|██████████| 725/725 [00:00<00:00, 4.20MB/s]
config.json: 100%|██████████| 1.68k/1.68k [00:00<00:00, 3.48MB/s]
unet/diffusion_pytorch_model.fp16.safete(…): 100%|██████████| 5.14G/5.14G [00:10<00:00, 480MB/s]
config.json: 100%|██████████| 642/642 [00:00<00:00, 3.11MB/s]
vae/diffusion_pytorch_model.fp16.safeten(…): 100%|██████████| 167M/167M [00:00<00:00, 180MB/s]
vae_1_0/diffusion_pytorch_model.fp16.saf(…): 100%|██████████| 167M/167M [00:00<00:00, 187MB/s]
Loading pipeline components...: 100%|██████████| 7/7 [00:02<00:00,  3.24it/s]
/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:202: UserWarning: The `local_dir_use_symlinks` argument is deprecated and ignored in `hf_hub_download`. Downloading to a local directory does not use symlinks anymore.
  warnings.warn(


config.json:   0%|          | 0.00/607 [00:00<?, ?B/s]
config.json: 100%|██████████| 607/607 [00:00<00:00, 3.55MB/s]
An error occurred while trying to fetch stabilityai/sdxl-vae: stabilityai/sdxl-vae does not appear to have a file named diffusion_pytorch_model.fp16.safetensors.
Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 536, in hf_raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.10/site-packages/httpx/_models.py", line 829, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Not Found' for url 'https://huggingface.co/stabilityai/sdxl-vae/resolve/6f5909a7e596173e25d4e97b07fd19cdf9611c76/diffusion_pytorch_model.fp16.bin'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/diffusers/utils/hub_utils.py", line 290, in _get_model_file
    model_file = hf_hub_download(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1038, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1111, in _hf_hub_download_to_cache_dir
    (url_to_download, etag, commit_hash, expected_size, xet_file_data, head_call_error) = _get_metadata_or_catch_error(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1649, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1575, in get_hf_file_metadata
    response = _httpx_follow_relative_redirects(method="HEAD", url=url, headers=hf_headers, timeout=timeout)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 291, in _httpx_follow_relative_redirects
    hf_raise_for_status(response)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 550, in hf_raise_for_status
    raise _format(RemoteEntryNotFoundError, message, response) from e
huggingface_hub.errors.RemoteEntryNotFoundError: 404 Client Error. (Request ID: Root=1-69091477-318d4d7e58d07f3f413dfe30;65f35add-57a6-4134-91e1-cca3ef0c5cdd)

Entry Not Found for url: https://huggingface.co/stabilityai/sdxl-vae/resolve/6f5909a7e596173e25d4e97b07fd19cdf9611c76/diffusion_pytorch_model.fp16.bin.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/app.py", line 2, in <module>
    import models
  File "/app/models.py", line 97, in <module>
    load_and_compile_models()
  File "/app/models.py", line 38, in load_and_compile_models
    pipe_global.vae = AutoencoderKL.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 1214, in from_pretrained
    resolved_model_file = _get_model_file(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/utils/hub_utils.py", line 317, in _get_model_file
    raise EnvironmentError(
OSError: stabilityai/sdxl-vae does not appear to have a file named diffusion_pytorch_model.fp16.bin.

BTW, I just committed the fix, but your code seems to be written specifically for ZeroGPU, so it will throw errors on a CPU Space. Given the model size, it probably wouldn't run on a CPU Space anyway…
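For context on the second failure: diffusers first looked for diffusion_pytorch_model.fp16.safetensors in stabilityai/sdxl-vae, then fell back to the .bin name and got a 404, because that repo doesn't publish fp16-variant weight files. A hedged sketch of one possible workaround (I don't know what the committed fix actually changed, and the helper below and its names are mine): either drop the `variant="fp16"` argument so the fp32 safetensors weights are used, or point at madebyollin/sdxl-vae-fp16-fix, which does ship weights intended for fp16 inference.

```python
def vae_kwargs(prefer_fp16_fix: bool = True):
    """Build AutoencoderKL.from_pretrained() arguments that avoid the
    missing fp16 variant in stabilityai/sdxl-vae (assumption: the Space
    currently passes variant="fp16" to that repo)."""
    if prefer_fp16_fix:
        # Community repo with weights meant to be run in fp16.
        return {"pretrained_model_name_or_path": "madebyollin/sdxl-vae-fp16-fix"}
    # Otherwise load the fp32 weights the original repo actually has:
    # just omit variant="fp16".
    return {"pretrained_model_name_or_path": "stabilityai/sdxl-vae"}

# Intended usage inside load_and_compile_models() (not executed here):
#     pipe_global.vae = AutoencoderKL.from_pretrained(
#         **vae_kwargs(), torch_dtype=torch.float16
#     )
print(vae_kwargs()["pretrained_model_name_or_path"])
```

Either way, the key point is to stop requesting a `fp16` file variant that the target repo never uploaded.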