Hi, I am building a RAG app for my codebase. I want the app to differentiate between queries that need context from the codebase and those that don't.
Basic prompts like "Hi" or "Who are you?" can be answered without any codebase context. However, because retrieved context gets injected into the prompt anyway, the RAG pipeline does not handle these queries correctly.
Currently, I am trying to differentiate between these queries using Llama 3.2, but it is producing false positives: queries that do need context (e.g., questions about specific functions in the repo) get flagged as "BASIC_QUERY_DETECTED". Can someone suggest a better approach? Here is my current function:
def generate_clarification_question(query, retrieved_docs, previous_conversation_context=""):
    """
    Uses Llama 3.2 to decide whether the query needs codebase context.
    Returns "BASIC_QUERY_DETECTED" for basic queries (greetings or simple
    identity questions); otherwise returns a refined query for more
    specific backend retrieval.
    """
    prompt = f"""
Previous conversation:
{previous_conversation_context}

Retrieved documents:
{retrieved_docs}

User Query:
"{query}"

Instructions:
1. If the user's query is a simple greeting (e.g., "Hi", "Hello") or a basic identity question (e.g., "Who are you?", "Who am I?"), respond with exactly: "BASIC_QUERY_DETECTED".
2. For any other query, generate a refined query that can be used to fetch better results from the database index.
3. Base the refined query on the current context and the user query, incorporating technical terms and relevant repository names.

Provide only the necessary output.
"""
    # query_ollama and light_model are defined elsewhere in my code
    # (a thin wrapper around the Ollama API and the Llama 3.2 model tag).
    clarification = query_ollama(prompt, light_model)
    return clarification.strip()
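
For reference, this is roughly how I route on the result downstream (a minimal sketch: index.search is a stand-in for my actual retrieval call, and query_ollama/light_model are the same helpers as above):

def answer(query, index, history=""):
    # First-pass retrieval so the router can see candidate context
    docs = index.search(query)  # hypothetical retrieval call
    result = generate_clarification_question(query, docs, history)
    if result == "BASIC_QUERY_DETECTED":
        # Basic query: answer directly, without injecting codebase context
        return query_ollama(f'Answer conversationally: "{query}"', light_model)
    # Otherwise treat the output as a refined query and retrieve again
    refined_docs = index.search(result)
    final_prompt = f"Context:\n{refined_docs}\n\nQuestion: {query}"
    return query_ollama(final_prompt, light_model)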