Default value for gradio dropdown getting set to None in TabbedInterface

```python
import gradio as gr

classify_interface = gr.Interface(
    fn=classify_image,  # my prediction function
    inputs=gr.Image(type="pil", label="Image"),
    outputs=[gr.Label(num_top_classes=3, label="Predictions"),
             gr.Number(label="Prediction time (secs)")],
)

attention_interface = gr.Interface(
    fn=plot_attention,
    inputs=[gr.Image(type="pil", label="Image"),
            gr.Dropdown(choices=["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12"],
                        label="Attention Layer", value="6")],
    outputs=gr.Gallery(value=attention_maps, label="Attention Maps").style(grid=(3, 4)),
)

demo = gr.TabbedInterface([classify_interface, attention_interface],
                          ["Identify Disease", "Visualize Attention Map"],
                          title="NatureAI Diagnostics🧑🩺")

if __name__ == "__main__":
    demo.launch()
```
So basically there are two tabs: on one, the user sees the predictions for the uploaded image; on the other, depending on which encoder layer the user picks from the dropdown, it returns a grid of images showing the transformer network's attention weights for that layer.

The main issue is this:

The final error message basically says:

```
File "", line 88, in plot_attention
    with nopdb.capture_call(vision_transformer.blocks[int(layer_num)-1].attn.forward) as attn_call:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
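The TypeError itself is plain Python behavior, nothing Gradio-specific: `int()` happily converts the dropdown's string choices but raises the moment it receives `None`. A minimal reproduction:

```python
# int() converts the dropdown's string choices without trouble...
print(int("6") - 1)  # index of the 6th transformer block -> 5

# ...but raises the error from the traceback as soon as it gets None:
try:
    int(None)
except TypeError as e:
    print(e)
```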

(Unable to post the full screenshot since new users can only post 1 media attachment)

Basically, the default value for my dropdown, which I set with value="6", gets replaced with None. This is the plot_attention function:

```python
def plot_attention(image, layer_num):
    """Given an input image, plot the average attention weight given to each image patch by each attention head."""
    attention_map_outputs = []
    input_data = data_transforms(image)
    with nopdb.capture_call(vision_transformer.blocks[int(layer_num) - 1].attn.forward) as attn_call:
        make_prediction(input_data)  # run a forward pass so the attn call gets captured
    attn = attn_call.locals['attn'][0]
    with torch.inference_mode():
        # loop over attention heads
        for h_weights in attn:
            h_weights = h_weights.mean(axis=-2)  # average over all attention keys
            h_weights = h_weights[1:]  # skip the [class] token
            output_img = plot_weights(input_data, h_weights)
            attention_map_outputs.append(output_img)
    return attention_map_outputs

attention_maps = plot_attention(random_image, "6")
```

I used print(image.shape) and print(layer_num) at the start of the function to check whether it was actually receiving any input when I first called it via attention_maps = plot_attention(random_image, "6") — it was. But once it gets to the interface section, the default value is set to None.

When I launch it locally or on Colab, everything works fine. The weird thing is that even when I manually set value=None on my local machine or on Colab, the app still works for some reason. I'm really new to Gradio, so I sincerely apologize if this is an obvious mistake I'm unable to see — I just wanted to try something out as a pet project and would really appreciate some help. Thank you.

Bottom half of the error message:

Update: I replaced the dropdown menu with a slider (min_value=1, max_value=12, steps=1, default_value=6), but the issue persists (same error message as in the attached screenshots). Obviously I can't replace the input with a text box, since then I'd have no control over what the user enters (the input has to be between 1 and 12).
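In the meantime, one defensive workaround (a sketch, not Gradio-specific; the helper name is made up) is to coerce and clamp the incoming value inside the function itself, so a None coming from the component can't reach the int() call, and the 1–12 range is enforced regardless of the input widget:

```python
def safe_layer_num(layer_num, default=6, low=1, high=12):
    """Coerce a dropdown/slider value to an int in [low, high];
    fall back to `default` when the component delivers None (or junk)."""
    try:
        n = int(layer_num)
    except (TypeError, ValueError):
        return default
    return max(low, min(n, high))
```

For example, safe_layer_num(None) falls back to 6, while safe_layer_num("99") is clamped to 12.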

Upon reading through the Gradio docs: apparently, if value in gr.Dropdown or gr.Slider is callable, that function is called whenever the app loads to set the component's initial value. I'm guessing this is where the default keeps getting replaced by None. I still can't figure out why that happens, though, since the app runs fine on Colab and on my local machine.

For now, I have disabled selecting the transformer encoder layer number and the app works fine on huggingface spaces, kind of a bummer since I really wanted to add that functionality. If anyone has any suggestions then please let me know. Thank you.

Thank you for your time, I found the thread you were talking about. Will definitely give it a spin first thing in the morning. Appreciate the help.


Apparently what fixed the issue was setting type="index" in gr.Dropdown() plus giving the parameter a default value in the function signature itself, like def plot_attention(image, num_encoder_layer=5). If I don't do that and instead rely solely on the value and type parameters, I end up with the same error message I posted above.
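A minimal sketch of how that fix behaves (helper name is made up; only the index arithmetic is shown): with type="index", the dropdown passes the 0-based position of the selected choice rather than its string, so picking "6" delivers the integer 5, and the default in the signature covers the None-on-load case.

```python
layer_choices = [str(i) for i in range(1, 13)]  # "1" .. "12"

# With type="index", Gradio hands the function the 0-based index of the
# selected choice; the default of 5 corresponds to the choice "6".
def selected_layer(num_encoder_layer=5):
    return int(layer_choices[num_encoder_layer])  # back to the 1-based layer
```

So selected_layer() gives 6, and selected_layer(0) gives 1.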
