State Handling and Live Mode in Gradio Blocks

When working with Interface, I like Gradio's live mode, streaming, and state.

When working with Blocks, I love the layout capabilities.

When I try to put the two together and use live mode with state accumulation in Gradio Blocks, I get confused about what the implementation should look like.

For example, in the code below I can't seem to get it working. Can anyone offer tips on how to use these two awesome features properly in Blocks?

Demo space here: GradioVoicetoTexttoSentiment - a Hugging Face Space by awacke1

Code which is not working for state, live mode, and streaming:

from transformers import pipeline
import gradio as gr

# Speech recognition and sentiment classification pipelines
asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h")
classifier = pipeline("text-classification", "michellejieli/emotion_text_classifier")

def transcribe(speech, state=""):
    text = asr(speech)["text"]
    state += text + " "
    return text, state

def speech_to_text(speech):
    text = asr(speech)["text"]
    return text

def text_to_sentiment(text):
    return classifier(text)[0]["label"]

demo = gr.Blocks()
with demo:
    microphone = gr.Audio(source="microphone", type="filepath")
    audio_file = gr.Audio(type="filepath")
    text = gr.Textbox()
    label = gr.Label()

    b0 = gr.Button("Speech From Microphone")
    b1 = gr.Button("Recognize Speech")
    b2 = gr.Button("Classify Sentiment")

    # Desired usage (string-based state and live mode), which doesn't work in Blocks:
    # b0.click(transcribe, inputs=[microphone, "state"], outputs=[text, "state"], live=True)
    b0.click(transcribe, inputs=[microphone], outputs=[text])
    b1.click(speech_to_text, inputs=audio_file, outputs=text)
    b2.click(text_to_sentiment, inputs=text, outputs=label)

    gr.Markdown("""References:
1. ASR Model: https://huggingface.co/facebook/wav2vec2-base-960h
2. Sentiment: https://huggingface.co/michellejieli/emotion_text_classifier
3. ASR Lesson: https://gradio.app/real-time-speech-recognition/
4. State: https://gradio.app/interface-state/
5. Deepspeech: https://deepspeech.readthedocs.io/en/r0.9/
""")

demo.launch()

Any help or tips would be greatly appreciated!

–Aaron


I used the change event to trigger the predict function. That means that whenever the user changes the audio input, the prediction will occur automatically.

import gradio as gr

# `description` and `predict` are defined elsewhere in my app
with gr.Blocks() as demo:
    gr.Markdown(description)
    chatbot = gr.Chatbot()
    state = gr.State([])

    with gr.Row():
        audio_file = gr.Audio(label="Audio", source="microphone", type="filepath")

    # Fires automatically whenever the recorded audio changes
    audio_file.change(predict, [audio_file, state], [chatbot, state])

demo.launch()
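
If it helps, here is a minimal sketch of what predict could look like, reusing the two pipelines from the original question. The (text, sentiment) tuple format is just one way to fill the Chatbot; the exact implementation is up to you:

from transformers import pipeline

asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h")
classifier = pipeline("text-classification", "michellejieli/emotion_text_classifier")

def predict(speech, state):
    # Transcribe the latest recording and classify its sentiment
    text = asr(speech)["text"]
    sentiment = classifier(text)[0]["label"]
    # Append a (user, bot) pair; gr.Chatbot renders a list of such tuples
    state = state + [(text, sentiment)]
    # Return the list twice: once for the Chatbot display, once for gr.State
    return state, state

Returning the accumulated list for both outputs is what gives you state accumulation across turns, since gr.State feeds the updated value back in on the next change event.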