I am trying to run the latest Llama model in a Space. Is there any way to do this successfully?
```shell
Exit code: 1. Reason: Fetching model from: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision-Instruct
Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    gr.load("models/meta-llama/Llama-3.2-90B-Vision-Instruct").launch()
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/external.py", line 75, in load
    return load_blocks_from_huggingface(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/external.py", line 109, in load_blocks_from_huggingface
    blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/gradio/external.py", line 367, in from_model
    raise ValueError(f"Unsupported pipeline type: {p}")
ValueError: Unsupported pipeline type: image-text-to-text
```
Maybe it needs:

```shell
pip install -U huggingface_hub
```
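Note that in a Space a one-off `pip install` won't survive a rebuild; the upgrade belongs in `requirements.txt`, which the Space reinstalls from at startup. A sketch, with an illustrative version pin (the version actually tried later in this thread):

```
# requirements.txt in the Space repo -- the pin is an assumption,
# just "new enough to know the image-text-to-text pipeline type"
huggingface_hub>=0.26.3
```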
Related GitHub issue (opened 07:26AM - 27 Sep 24 UTC, label: enhancement):
### Describe the bug
```shell
/usr/local/lib/python3.10/dist-packages/gradio/external.py in from_model(model_name, hf_token, alias, **kwargs)
    368             fn = client.image_to_image
    369         else:
--> 370             raise ValueError(f"Unsupported pipeline type: {p}")
    371 
    372     def query_huggingface_inference_endpoints(*data):

ValueError: Unsupported pipeline type: image-text-to-text
```
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
gr.load("models/meta-llama/Llama-3.2-90B-Vision-Instruct").launch()

```
### Screenshot
https://github.com/user-attachments/assets/28470dd0-fdf9-44f1-80b7-5ea611772e57
### Logs
```shell
Fetching model from: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision-Instruct
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-8dcbad077325> in <cell line: 3>()
      1 import gradio as gr
      2 
----> 3 gr.load("models/meta-llama/Llama-3.2-90B-Vision-Instruct").launch(debug=True)

2 frames
/usr/local/lib/python3.10/dist-packages/gradio/external.py in from_model(model_name, hf_token, alias, **kwargs)
    368             fn = client.image_to_image
    369         else:
--> 370             raise ValueError(f"Unsupported pipeline type: {p}")
    371 
    372     def query_huggingface_inference_endpoints(*data):

ValueError: Unsupported pipeline type: image-text-to-text
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 4.44.0
gradio_client version: 1.3.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
fastapi: 0.115.0
ffmpy: 0.4.0
gradio-client==1.3.0 is not installed.
httpx: 0.27.2
huggingface-hub: 0.24.6
importlib-resources: 6.4.5
jinja2: 3.1.4
markupsafe: 2.1.5
matplotlib: 3.7.1
numpy: 1.26.4
orjson: 3.10.7
packaging: 24.1
pandas: 2.1.4
pillow: 9.4.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart: 0.0.9
pyyaml: 6.0.2
ruff: 0.6.5
semantic-version: 2.10.0
tomlkit==0.12.0 is not installed.
typer: 0.12.5
typing-extensions: 4.12.2
urllib3: 2.0.7
uvicorn: 0.30.6
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.6.1
httpx: 0.27.2
huggingface-hub: 0.24.6
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Blocking usage of gradio
jimbobu
December 4, 2024, 10:56am
3
Good idea. I tried installing huggingface_hub==0.26.3, but I still get the same error when I restart my Space.
This must be a Gradio bug, or the pipeline type is simply not implemented.
Container logs:
```shell
===== Application Startup at 2024-12-04 11:20:52 =====
Package Version
------------------ -----------
aiofiles 23.2.1
aiohappyeyeballs 2.4.4
aiohttp 3.11.9
aiosignal 1.3.1
annotated-types 0.7.0
anyio 4.6.2.post1
async-timeout 5.0.1
attrs 24.2.0
Authlib 1.3.2
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.4.0
click 8.0.4
cryptography 44.0.0
datasets 3.1.0
dill 0.3.8
exceptiongroup 1.2.2
fastapi 0.115.6
ffmpy 0.4.0
filelock 3.16.1
frozenlist 1.5.0
fsspec 2024.9.0
gradio 5.7.1
gradio_client 1.5.0
h11 0.14.0
hf_transfer 0.1.8
httpcore 1.0.7
httpx 0.28.0
huggingface-hub 0.26.3
idna 3.10
itsdangerous 2.2.0
Jinja2 3.1.4
markdown-it-py 3.0.0
MarkupSafe 2.1.5
mdurl 0.1.2
multidict 6.1.0
multiprocess 0.70.16
numpy 2.1.3
orjson 3.10.12
packaging 24.2
pandas 2.2.3
pillow 11.0.0
pip 22.3.1
propcache 0.2.1
protobuf 3.20.3
psutil 5.9.8
pyarrow 18.1.0
pycparser 2.22
pydantic 2.10.3
pydantic_core 2.27.1
pydub 0.25.1
Pygments 2.18.0
python-dateutil 2.9.0.post0
python-multipart 0.0.12
pytz 2024.2
PyYAML 6.0.2
requests 2.32.3
rich 13.9.4
ruff 0.8.1
safehttpx 0.1.6
semantic-version 2.10.0
setuptools 65.5.1
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
spaces 0.30.4
starlette 0.41.3
tomlkit 0.12.0
tqdm 4.67.1
typer 0.15.0
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.2.3
uvicorn 0.32.1
websockets 12.0
wheel 0.45.1
xxhash 3.5.0
yarl 1.18.3
[notice] A new release of pip available: 22.3.1 -> 24.3.1
[notice] To update, run: pip install --upgrade pip
Fetching model from: https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct
Traceback (most recent call last):
  File "/home/user/app/app.py", line 5, in <module>
    gr.load("unsloth/Llama-3.2-11B-Vision-Instruct", src="models", hf_token=os.environ.get("HF_TOKEN"), examples=None).launch()
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 85, in load
    return load_blocks_from_huggingface(
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 161, in load_blocks_from_huggingface
    blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 419, in from_model
    raise ValueError(f"Unsupported pipeline type: {p}")
ValueError: Unsupported pipeline type: image-text-to-text
```
app.py:

```python
import os
import gradio as gr
import subprocess

subprocess.run("pip list", shell=True)
gr.load("unsloth/Llama-3.2-11B-Vision-Instruct", src="models", hf_token=os.environ.get("HF_TOKEN"), examples=None).launch()
```
Edit: There is no `image-text-to-text` branch in `from_model`, so it's simply not implemented. From gradio's `external.py`:
```python
        ]
        outputs = components.Image(label="Output")
        examples = [
            [
                "https://gradio-builds.s3.amazonaws.com/demo-files/cheetah-002.jpg",
                "Photo of a cheetah with green eyes",
            ]
        ]
        fn = client.image_to_image
    else:
        raise ValueError(f"Unsupported pipeline type: {p}")

    def query_huggingface_inference_endpoints(*data):
        if preprocess is not None:
            data = preprocess(*data)
        try:
            data = fn(*data)  # type: ignore
        except huggingface_hub.utils.HfHubHTTPError as e:
            if "429" in str(e):
                raise TooManyRequestsError() from e
        if postprocess is not None:
```
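Until gradio grows that branch, one workaround is to skip `gr.load()` entirely and call the model through `huggingface_hub.InferenceClient` behind a hand-built interface. A minimal sketch, assuming the checkpoint is reachable through the serverless chat-completion API and that `HF_TOKEN` is set as a Space secret (the `describe` function and the component layout are illustrative, not a confirmed fix):

```python
import os

import gradio as gr
from huggingface_hub import InferenceClient

# Assumption: this model is servable via the Inference API; swap in
# whichever vision-instruct checkpoint your token can actually access.
client = InferenceClient(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    token=os.environ.get("HF_TOKEN"),
)

def describe(image_url: str, prompt: str) -> str:
    # chat_completion takes OpenAI-style multimodal messages,
    # so the image is passed as an image_url content part.
    response = client.chat_completion(
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        max_tokens=256,
    )
    return response.choices[0].message.content

gr.Interface(
    fn=describe,
    inputs=[gr.Textbox(label="Image URL"), gr.Textbox(label="Prompt")],
    outputs=gr.Textbox(label="Answer"),
).launch()
```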