Using the prompt to switch models

Hello,

I am new to Gradio and tried to implement a way to switch models by identifying a keyword inside the prompt.

The problem seems to be that whatever model is selected, it is ignored by Gradio.

Can someone explain what is wrong in my code, and whether this is feasible or not?

Thanks

Hi @blogclif, you can’t pass a fn to gr.Interface.load(); it only loads a single model at a time. However, it is possible to do what you have in mind by noting that a model loaded with gr.Interface.load() can be used as a function. So you could do something like this:

import gradio as gr

models = [
    gr.Interface.load("model1name"),
    gr.Interface.load("model2name"),
    ...
]

def prediction(model_choice, input):
    # type="index" on the Dropdown passes the selected option's index,
    # which we use to pick the loaded model.
    model = models[model_choice]
    output = model(input)
    return output

gr.Interface(
    fn=prediction,
    inputs=[gr.Dropdown(["model1name", "model2name"], type="index"), gr.Textbox()],
    outputs=gr.Image(),
).launch()

This is just a code skeleton but let me know if it makes sense!
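If you wanted to keep the keyword-in-prompt idea from your first post, you could also parse the model name out of the prompt inside the function itself. A rough sketch along those lines (the keyword list, prediction_from_prompt, and the fallback are placeholders, not tested code):

model_names = ["model1name", "model2name"]

def prediction_from_prompt(prompt):
    # Pick the model whose name appears as a keyword in the prompt,
    # stripping the keyword before passing the prompt along.
    for name, model in zip(model_names, models):
        if name in prompt:
            return model(prompt.replace(name, "").strip())
    return models[0](prompt)  # fall back to the first model if no keyword matches

gr.Interface(fn=prediction_from_prompt, inputs=gr.Textbox(), outputs=gr.Image()).launch()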

2 Likes

Thanks a lot for looking into it.

I will try. I liked the fact that the prompt included the model name as plain text; that seemed really transparent. But having to select the model via a dropdown is surely better, and more reliable.

Thanks a lot

Hi, this is a cool example using this approach: app.py · Omnibus/maximum_diffusion at a451648ba3f1b4e498072f0e4a462d4c265fe713, from the project Maximum Diffusion - a Hugging Face Space by Omnibus. It is currently down, but I’ve pinged @Omnibus to look at the error.

2 Likes

Thanks a lot for the help. I have been writing some more code yesterday and today; let’s see what will work best.

I will most likely go in the direction of a dropdown menu, as I understand now that extracting the model from the prompt is not accepted by Gradio, because Gradio needs two variables and won’t accept the second variable after it has been called.

I guess that was the misunderstanding I had at the very beginning.

2 Likes

@Omnibus

I did some more testing and found another user who had the interface I was looking for.

There is a new problem now: this error is popping up for all the models.

OSError: We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like models/ItsJayQz/Marvel_WhatIf_Diffusion is not the path to a directory containing a model_index.json file.

1 Like

Will try. Here was the link to the other Space that had the interface I was trying to build in the first place.

I left a comment in his discussion thread.

It feels like your solution above is working after removing the "models/" prefix and using get_default_dtype; a sketch of the prefix strip is below.
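A minimal sketch of that prefix strip, assuming the dropdown value arrives as a "models/owner/name" string (raw_choice is a hypothetical variable; the Space logs show Python 3.8, so str.removeprefix is not available):

prefix = "models/"
# raw_choice might look like "models/ItsJayQz/Marvel_WhatIf_Diffusion"
model_id = raw_choice[len(prefix):] if raw_choice.startswith(prefix) else raw_choice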

Now in the log I see:

Downloading: 71%|███████ | 2.47G/3.46G [00:35<00:18, 54.4MB/s]

It still shows an 'Error' at the end, and the logs are still scrolling … I can’t really catch the error message this time.

This is what I have:

Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate

.
Traceback (most recent call last):
File "/home/user/.local/lib/python3.8/site-packages/gradio/routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 1015, in process_api
result = await self.call_function(fn_index, inputs, iterator, request)
File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 856, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/user/.local/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/user/.local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/user/.local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "app.py", line 18, in TextToImage
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.get_default_dtype)
File "/home/user/.local/lib/python3.8/site-packages/diffusers/pipeline_utils.py", line 708, in from_pretrained
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2325, in from_pretrained
dtype_orig = cls._set_default_torch_dtype(torch_dtype)
File "/home/user/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1102, in _set_default_torch_dtype
if not dtype.is_floating_point:
AttributeError: 'builtin_function_or_method' object has no attribute 'is_floating_point'
Traceback (most recent call last):
File "/home/user/.local/lib/python3.8/site-packages/gradio/routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 1013, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 923, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
IndexError: list index out of range
Traceback (most recent call last):
File "/home/user/.local/lib/python3.8/site-packages/diffusers/configuration_utils.py", line 326, in load_config
config_file = hf_hub_download(
File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
validate_repo_id(arg_value)
File "/home/user/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 172, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: ''.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/user/.local/lib/python3.8/site-packages/gradio/routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 1015, in process_api
result = await self.call_function(fn_index, inputs, iterator, request)
File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 856, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/user/.local/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/user/.local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/user/.local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "app.py", line 18, in TextToImage
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.get_default_dtype)
File "/home/user/.local/lib/python3.8/site-packages/diffusers/pipeline_utils.py", line 459, in from_pretrained
config_dict = cls.load_config(
File "/home/user/.local/lib/python3.8/site-packages/diffusers/configuration_utils.py", line 363, in load_config
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like is not the path to a directory containing a model_index.json file.
Checkout your internet connection or see how to run the library in offline mode at 'Installation'.


To get this last error, I left the model field blank.
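For what it’s worth, the first traceback points at the torch_dtype argument: torch.get_default_dtype is passed as a function object instead of being called, which is exactly what raises the 'builtin_function_or_method' AttributeError. A minimal sketch of the corrected call (the repo id is just the example from earlier in this thread):

import torch
from diffusers import StableDiffusionPipeline

model_id = "ItsJayQz/Marvel_WhatIf_Diffusion"  # example repo id from this thread

# Call get_default_dtype() so an actual dtype is passed, not the function itself;
# an explicit dtype such as torch.float16 also works for GPU inference.
pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.get_default_dtype(),
)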

@Omnibus
Oh well, this time the generating part never ends; there are no errors, it just won’t finish.

I give up.

Thanks again for your assistance.

I let it run for 3 hours and just closed the tab (after taking this screenshot).

The logs are showing a lot of downloads, and sometimes I see this:

Fetching 16 files: 100%|██████████| 16/16 [02:48<00:00, 10.53s/it]
/home/user/.local/lib/python3.8/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(

If this would load the models once and for all, that could really speed up the process.
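One common pattern for that, as a rough sketch (assuming diffusers pipelines; get_pipe and _pipes are hypothetical names, not code from this thread), is to cache loaded pipelines in a dict so each model is downloaded and initialized only once per process:

import torch
from diffusers import StableDiffusionPipeline

_pipes = {}  # cache: repo id -> loaded pipeline

def get_pipe(model_id):
    # Load each model at most once; later calls reuse the cached pipeline.
    if model_id not in _pipes:
        _pipes[model_id] = StableDiffusionPipeline.from_pretrained(
            model_id, torch_dtype=torch.float16
        )
    return _pipes[model_id]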

Thanks a lot, I will use this new code above
and the solution provided by @abidlabs

Hopefully, this will work faster

1 Like

Looks promising. I don’t have the new version working yet; I made the Space private in the meantime to get faster results.

I will make the Space public once I’m done.

1 Like

I am finishing up, but it works now.

Thanks a lot for your help. I am glad this is resolved, and I learned a lot in the last 2 days thanks to you.

Here is the final version:
https://alstable-marvel.hf.space

As a side note, I had to get rid of one of the styles,

Fetching model from: DGSpitzer/Guan-Yu-Diffusion · Hugging Face

as it was giving this error:

ValueError: Unsupported pipeline type: None

This thread can now be closed

1 Like

Here is a fresh example.

1 Like