Simple LangChain Code in Gradio

Dear community,

I had some working code that recently stopped working.
I'm not sure what caused it, but it was initially working in Google Colab.

It basically uses the Gradio Chatbot UI and passes queries to OpenAI via LangChain.

I used a global variable called chat_history to hold the conversation. It was working fine, but all of a sudden it is now a problem.

I can only ask the first question; I always get the same error on the second:

ValueError: Unsupported chat history format: <class 'list'>. Full chat history: [['Hi', 'Hello! How can I assist you today?']]

It seems it doesn't like it when I pass the previous history back into the variable and query the model with the second question.

What did I do wrong?

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# Initialise LangChain - Conversational Retrieval Chain
# (vectorstore is created earlier in the notebook)
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), vectorstore.as_retriever())

# Front end web app
import gradio as gr

# Define an empty chat_history list
chat_history = []

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.Button("Clear")
    
    def user(user_message, chat_history):
        # Get response from QA chain
        response = qa({"question": user_message, "chat_history": chat_history})
        # Append user message and response to chat history
        chat_history = [(user_message, response["answer"])]
        return gr.update(value=""), chat_history
    
    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False)
    clear.click(lambda: None, None, chatbot, queue=False)

if __name__ == "__main__":
    demo.launch(debug=True)

Thanks,

I have worked it out.

For some reason, the qa chain changed not long after I made my video; it no longer accepts the list input.

Here is the video proof that it used to work:
https://www.youtube.com/watch?v=QpAZaomAc_A&t=1098s

It also introduced a memory object to supply the history.
But if you still want to pass the history in directly, as I did in my code, you now need to convert each list entry to a tuple.

I don't know how true that is, but it has proven to work. At least this is what Bard told me.

Here is the updated code. The only difference is converting each list to a tuple and passing the tuples to the qa chain.

        # Convert Gradio's list-of-lists chat history to a list of tuples
        chat_history_tuples = [tuple(message) for message in chat_history]
        
        # Get result from QA chain
        result = qa({"question": query, "chat_history": chat_history_tuples})
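For anyone hitting the same error: the conversion itself is plain Python, so you can sanity-check it without touching LangChain or Gradio at all. A minimal sketch (the history_to_tuples name is mine, not part of either library):

```python
# Gradio's Chatbot component stores history as a list of [user, bot] lists,
# e.g. [['Hi', 'Hello! How can I assist you today?']].
# The qa chain now wants a list of (user, bot) tuples instead.

def history_to_tuples(chat_history):
    """Convert Gradio-style [[user, bot], ...] history to [(user, bot), ...]."""
    return [tuple(pair) for pair in chat_history]

print(history_to_tuples([['Hi', 'Hello! How can I assist you today?']]))
# → [('Hi', 'Hello! How can I assist you today?')]
```

Calling the helper on the exact history from the error message above shows the shape the chain expects.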