Gradio Error: UndefinedError: 'str object' has no attribute 'role'

I am trying to do a response generation task using the GODEL model through the transformers pipeline as a high-level helper. I have done some prompt structuring and templating in my code, and I am getting a particular error that I am unable to solve. Below is the code of my response generation function:

from transformers import pipeline

# Assumes user_message, intent_classification, emotion_trainer, unique_labels, and label_map_emotions are defined elsewhere
intent = unique_labels[int(intent_classification(user_message))]
emotion = label_map_emotions[emotion_trainer.predict(user_message)]

# Instruction for the therapist chatbot
instruction = """
    You are an AI therapist specializing in mental health counseling. Your name is SereneMind.
    Your goal is to respond empathetically and supportively to users seeking guidance and emotional assistance.
    In each interaction, consider the user's emotions and intentions as provided.
    Your responses should be thoughtful, considerate, and focused on providing a positive impact.
    If you don't understand or need clarification, gently ask for more information.
    Remember, your role is to be a compassionate and understanding mental health companion.
    Maximum length of your responses should be 2-3 sentences.
"""

# External information (intent and emotion)
external_info = f'Intent: {intent}, Emotion: {emotion}'

# User input placeholder
user_input_placeholder = "{user_input}"

# Output indicator (you can modify it based on your needs)
output_indicator = "Assistant:"

# Combine components in the prompt
prompt_template = f"{instruction} {external_info} User: {user_input_placeholder} {output_indicator}"

# Initialize the GODEL model and tokenizer
response_generator = pipeline(
    task="conversational",
    model="microsoft/GODEL-v1_1-large-seq2seq",
    tokenizer="microsoft/GODEL-v1_1-large-seq2seq",
)

def generate_response_predict_yt(user_input, history=None):
    # Avoid a mutable default argument and shadowing the built-in input()
    history = history or []

    # Combine user input with the prompt
    prompt = prompt_template.replace(user_input_placeholder, user_input)

    # Use the pipeline for sequence generation
    response = response_generator(prompt, max_length=64, top_p=0.9, temperature=1.0)[0]['generated_text']

    # Update the conversation history
    history.append((user_input, response))

    return history, response
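
For completeness, the function is wired into Gradio roughly like this (trimmed down; the exact component names here are illustrative rather than my real ones):

import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    reply = gr.Textbox(label="Latest response", interactive=False)
    msg = gr.Textbox(label="Your message")
    # The Chatbot component supplies and receives the (user, bot) history tuples;
    # the raw reply string goes to a separate read-only textbox.
    msg.submit(generate_response_predict_yt, inputs=[msg, chatbot], outputs=[chatbot, reply])

demo.launch()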

Context: I haven’t fine-tuned the GODEL model on any dataset; I think it works fine as-is for conversational tasks. The intent and emotion labels are pre-classified by separate models (I hope that doesn’t get in the way of this issue).

Please suggest what changes I should make to correct my prompt template and overcome this error; I think the problem lies in the structure of the prompt.

My problem was that I used an indexing operation to get the first messages in the OAI (role/content) format, and it was returning bare strings instead of message dicts, which is why the chat template failed on message.role. After fixing this and correctly indexing the first two messages (which were part of my prompt), it worked.
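
For anyone who hits the same error, here is a minimal sketch of the corrected call, assuming a transformers version whose conversational pipeline accepts messages in the OpenAI role/content format (the function name generate_response_fixed is just for illustration; instruction and external_info are the variables from my code above):

from transformers import pipeline

response_generator = pipeline(
    task="conversational",
    model="microsoft/GODEL-v1_1-large-seq2seq",
    tokenizer="microsoft/GODEL-v1_1-large-seq2seq",
)

def generate_response_fixed(user_input, history=None):
    history = history or []
    # Every message must be a dict with "role" and "content" keys; indexing out a
    # bare string here is exactly what makes the chat template fail with
    # "'str object' has no attribute 'role'".
    messages = [
        {"role": "system", "content": instruction},                        # first prompt message
        {"role": "user", "content": f"{external_info} {user_input}"},      # second prompt message
    ]
    conversation = response_generator(messages, max_length=64, top_p=0.9, temperature=1.0)
    # The conversational pipeline returns a Conversation object; the reply is its
    # last message, not [0]['generated_text'].
    response = conversation.messages[-1]["content"]
    history.append((user_input, response))
    return history, response

In other words, the prompt content itself did not need to change; the pipeline just has to receive the conversation as a list of role/content dicts (or a Conversation object) instead of one flat string.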