Hi Team,
I’m Darshan Hiranandani, and I’m building an AI assistant on top of Google’s Gemini Pro LLM to answer user queries about my product. I’m using the LangChain framework to call my backend APIs through Agents and Tools. However, I’d like to improve the quality of the model’s responses, particularly when it comes to providing actionable insights rather than generic answers.
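For context, here is a simplified sketch of how my agent is wired up. The tool, endpoint, and system prompt below are placeholder stand-ins for my actual setup, but the overall structure is the same:

```python
import requests
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent


@tool
def get_product_info(product_id: str) -> str:
    """Fetch product details from our internal API."""
    # Placeholder endpoint; the real API returns product and usage data.
    resp = requests.get(f"https://api.example.com/products/{product_id}")
    return resp.text


# Gemini Pro as the agent's LLM.
llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0)

# Current prompt: a short system instruction plus the user input and agent scratchpad.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer user questions about our product and suggest actionable next steps."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

tools = [get_product_info]
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({"input": "Why did my usage drop last week, and what should I do about it?"})
print(result["output"])
```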
Is there a specific prompt template or prompting method that would help the LLM better understand user queries and respond more effectively?
Also, do you think Gemini Pro is the right LLM for this task, or would it be better to switch to Meta’s Llama? If I do switch, how would I fine-tune the model for this use case?
Any suggestions or insights would be really helpful!
Regards,
Darshan Hiranandani