ValueError: Unsupported chat history format

I’m having this issue and can’t see why it’s happening. I have a fairly straightforward setup. It works fine on the first pass, when the history is empty. As soon as I submit my second prompt, it fails on the line that calls the ConversationalRetrievalChain (qa({"question": prompt, "chat_history": history})) and gives me the following error:

ValueError: Unsupported chat history format: <class 'list'>. Full chat history: [[.....]]

Here’s my code:

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.Button("Clear")

    def user(prompt, history):
        response = qa({"question": prompt, "chat_history": history})
        history.append((prompt, response["answer"]))
        return "", history

    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False)
    clear.click(lambda: None, None, chatbot, queue=False)

Any idea what could be happening? Thanks in advance.

I think the issue might be that Gradio's Chatbot returns the history as a list of lists, while the ConversationalRetrievalChain expects a list of tuples.
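If that's the cause, a minimal sketch of a fix would be to convert each pair to a tuple before calling the chain (assuming qa is your ConversationalRetrievalChain and history arrives as Gradio's [[user, bot], ...] pairs):

def user(prompt, history):
    # Convert Gradio's list-of-lists history into the list of
    # (user, assistant) tuples the chain expects.
    chat_history = [tuple(pair) for pair in history]
    response = qa({"question": prompt, "chat_history": chat_history})
    history.append((prompt, response["answer"]))
    return "", history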

For anyone else who ends up here: I ran into the same error and was eventually able to debug it. I adjusted the code like this:

# Convert Gradio's chat history format to LangChain's expected format
langchain_history = [
    (msg[1], history[i + 1][1] if i + 1 < len(history) else "")
    for i, msg in enumerate(history)
    if i % 2 == 0
]
# Get response from QA chain
response = qa({"question": user_message, "chat_history": langchain_history})

This code transforms the history into (user message, response) pairs and passes those into qa. Note that the if i+1 < len(history) check is necessary to avoid an IndexError on the last user message, which doesn’t yet have a corresponding response. In that case, it just uses an empty string as a placeholder.
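As a concrete illustration (the message values here are made up, and I'm assuming the history arrives as a flat list of alternating [role, message] entries, which is what the indexing above implies):

history = [["user", "Hi"], ["assistant", "Hello!"], ["user", "Any follow-up?"]]
langchain_history = [
    (msg[1], history[i + 1][1] if i + 1 < len(history) else "")
    for i, msg in enumerate(history)
    if i % 2 == 0
]
print(langchain_history)  # [('Hi', 'Hello!'), ('Any follow-up?', '')]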
