Unsupported pipeline type: image-text-to-text

I am trying to run the latest Llama vision model (Llama-3.2-90B-Vision-Instruct) in a Space. Is there any way to do this successfully?

Exit code: 1. Reason: Fetching model from: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision-Instruct
Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    gr.load("models/meta-llama/Llama-3.2-90B-Vision-Instruct").launch()
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/external.py", line 75, in load
    return load_blocks_from_huggingface(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/external.py", line 109, in load_blocks_from_huggingface
    blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/external.py", line 367, in from_model
    raise ValueError(f"Unsupported pipeline type: {p}")
ValueError: Unsupported pipeline type: image-text-to-text

Maybe it needs a newer client library:

pip install -U huggingface_hub

(In a Space, the upgrade goes into requirements.txt rather than a manual pip install.)
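
To confirm which version the Space actually picked up after the restart, you can print it at startup (a trivial check, assuming nothing beyond huggingface_hub being installed):

import huggingface_hub

# The version that the Space resolved from requirements.txt.
print(huggingface_hub.__version__)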

Good idea. I tried pinning huggingface_hub==0.26.3, but the same error comes back when I restart my Space.


This must be a Gradio bug, or the pipeline type simply isn't implemented yet.
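
The tag Gradio rejects is metadata on the Hub model card, not something the client library computes, which would explain why upgrading huggingface_hub changes nothing. A minimal check (assumes huggingface_hub is installed and, for gated repos like meta-llama, that HF_TOKEN is set):

import os
from huggingface_hub import model_info

# The pipeline tag comes from the Hub, independent of the client version.
info = model_info("meta-llama/Llama-3.2-90B-Vision-Instruct", token=os.environ.get("HF_TOKEN"))
print(info.pipeline_tag)  # image-text-to-text

Since gr.load dispatches on that tag, every model tagged image-text-to-text will hit the same ValueError until Gradio adds a branch for it.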

Container logs:

===== Application Startup at 2024-12-04 11:20:52 =====

Package            Version
------------------ -----------
aiofiles           23.2.1
aiohappyeyeballs   2.4.4
aiohttp            3.11.9
aiosignal          1.3.1
annotated-types    0.7.0
anyio              4.6.2.post1
async-timeout      5.0.1
attrs              24.2.0
Authlib            1.3.2
certifi            2024.8.30
cffi               1.17.1
charset-normalizer 3.4.0
click              8.0.4
cryptography       44.0.0
datasets           3.1.0
dill               0.3.8
exceptiongroup     1.2.2
fastapi            0.115.6
ffmpy              0.4.0
filelock           3.16.1
frozenlist         1.5.0
fsspec             2024.9.0
gradio             5.7.1
gradio_client      1.5.0
h11                0.14.0
hf_transfer        0.1.8
httpcore           1.0.7
httpx              0.28.0
huggingface-hub    0.26.3
idna               3.10
itsdangerous       2.2.0
Jinja2             3.1.4
markdown-it-py     3.0.0
MarkupSafe         2.1.5
mdurl              0.1.2
multidict          6.1.0
multiprocess       0.70.16
numpy              2.1.3
orjson             3.10.12
packaging          24.2
pandas             2.2.3
pillow             11.0.0
pip                22.3.1
propcache          0.2.1
protobuf           3.20.3
psutil             5.9.8
pyarrow            18.1.0
pycparser          2.22
pydantic           2.10.3
pydantic_core      2.27.1
pydub              0.25.1
Pygments           2.18.0
python-dateutil    2.9.0.post0
python-multipart   0.0.12
pytz               2024.2
PyYAML             6.0.2
requests           2.32.3
rich               13.9.4
ruff               0.8.1
safehttpx          0.1.6
semantic-version   2.10.0
setuptools         65.5.1
shellingham        1.5.4
six                1.16.0
sniffio            1.3.1
spaces             0.30.4
starlette          0.41.3
tomlkit            0.12.0
tqdm               4.67.1
typer              0.15.0
typing_extensions  4.12.2
tzdata             2024.2
urllib3            2.2.3
uvicorn            0.32.1
websockets         12.0
wheel              0.45.1
xxhash             3.5.0
yarl               1.18.3

[notice] A new release of pip available: 22.3.1 -> 24.3.1
[notice] To update, run: pip install --upgrade pip
Fetching model from: https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct
Traceback (most recent call last):
  File "/home/user/app/app.py", line 5, in <module>
    gr.load("unsloth/Llama-3.2-11B-Vision-Instruct", src="models", hf_token=os.environ.get("HF_TOKEN"), examples=None).launch()
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 85, in load
    return load_blocks_from_huggingface(
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 161, in load_blocks_from_huggingface
    blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 419, in from_model
    raise ValueError(f"Unsupported pipeline type: {p}")
ValueError: Unsupported pipeline type: image-text-to-text
app.py for the logs above (the traceback's line 5 is the gr.load call):

import os
import gradio as gr
import subprocess
subprocess.run("pip list", shell=True)  # dump the installed packages into the container logs
gr.load("unsloth/Llama-3.2-11B-Vision-Instruct", src="models", hf_token=os.environ.get("HF_TOKEN"), examples=None).launch()  # raises the ValueError

Edit:
There is no branch for image-text-to-text in Gradio's from_model, so it's simply not implemented.
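
Until Gradio grows that branch, one workaround is to skip gr.load entirely and build the interface by hand on top of huggingface_hub's InferenceClient. A minimal sketch, not code from this thread: the describe function, the data-URL image encoding, and the 11B model choice are all illustrative, and it assumes HF_TOKEN is set and the model is reachable through chat completion on the Inference API:

import base64
import os

import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient(token=os.environ.get("HF_TOKEN"))

def describe(image_path, prompt):
    # Inline the uploaded image as a data URL so it can ride along in the chat payload.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat_completion(
        model="meta-llama/Llama-3.2-11B-Vision-Instruct",  # illustrative; any chat-servable image-text-to-text model
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": prompt},
            ],
        }],
        max_tokens=512,
    )
    return response.choices[0].message.content

gr.Interface(
    fn=describe,
    inputs=[gr.Image(type="filepath"), gr.Textbox(label="Prompt")],
    outputs=gr.Textbox(label="Response"),
).launch()

This sidesteps from_model completely, so the unsupported pipeline tag never comes into play.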
