Hi,
I am using LangChain with different LLMs ("Llama-2-7b-chat-hf", "Mistral-7B-Instruct-v0.2", …).
My goal is to do prompt engineering and evaluate each model's ability to answer various questions.
The challenge is that after almost any question, the LLM starts a conversation with itself, generating follow-up questions as the "Human" and then answering them as the "AI."
What am I missing?
How can I ensure the model only responds to the original question without engaging in a self-conversation?
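For context, my understanding is that ConversationChain's default prompt frames every exchange with "Human:" and "AI:" labels, roughly like this (paraphrased from the LangChain source, so it may not be exact):

    The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

    Current conversation:
    {history}
    Human: {input}
    AI:

So it seems a raw completion model will just keep extending that transcript pattern, writing both roles, unless generation is stopped at the next "Human:" marker.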
Here is how I run the conversation via LangChain:
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=self.llm,  # the Hugging Face model, loaded elsewhere in my class
    memory=ConversationBufferMemory(),
    verbose=False,
)

print("*************************** chat_conversation ***************************")
while True:
    user_input = input("> ")
    ai_response = conversation.predict(input=user_input)
    print("\nAssistant:\n", ai_response)
    print("------------------------------------------------------------------------------------------------------\n\n\n")