from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

llm_chain = prompt | llm | StrOutputParser()

# Retrieve context and pass the user question through unchanged
setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)
rag_chain = setup_and_retrieval | llm_chain

result = rag_chain.invoke(user_question)
print(result)
Note: I am getting empty responses even when I try different prompts.
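For reference, each stage can be checked in isolation to narrow down where the empty output comes from. This is a minimal diagnostic sketch, assuming the same retriever, prompt, and llm objects as above:

# 1. Does the retriever actually return documents?
docs = retriever.invoke(user_question)
print(f"retrieved {len(docs)} documents")

# 2. Does the prompt render with both variables filled in?
rendered = prompt.invoke({"context": docs, "question": user_question})
print(rendered.to_string())

# 3. Does the bare LLM respond to the rendered prompt?
print(llm.invoke(rendered))

If step 1 prints 0 documents, the problem is the retriever rather than the prompt; if the rendered prompt in step 2 is missing the context or question, the chain's input keys do not match the template's variables.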
I have followed the prompt template below:
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
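If the model is served as a raw (non-chat) Llama 3 endpoint, this template has to be baked into the LangChain prompt by hand; if a chat wrapper already applies it, inserting the special tokens a second time can produce empty or garbled output. A minimal sketch of the first case, assuming a single-turn RAG prompt whose {context} and {question} placeholders match the keys produced by RunnableParallel above (the system prompt text is an assumption):

from langchain_core.prompts import PromptTemplate

# Assumed: Llama 3 special tokens written out literally, with LangChain
# placeholders for the values supplied by the chain at invoke time.
llama3_template = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "Answer the question using only the given context.\n"
    "Context: {context}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n"
    "{question}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n"
)
prompt = PromptTemplate.from_template(llama3_template)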