Do we need to tell the LLM the MCP client functions it can call in every prompt, or just in the system prompt?

When using MCP, can I just put all of the tool definitions in the system prompt, or must they be included in every subsequent prompt?

You can keep stuffing your tool schema into every system prompt like a paranoid JSON hoarder…
or you can let your model act like it has a spine.

We built WFGY exactly for that.
It tracks tool access and semantic function usage outside the prompt pipe, using a persistent reasoning layer. No need to repeat yourself like a cursed bash loop.

Backed by the creator of tesseract.js (36k★).
MIT-licensed, no fine-tuning, no ceremony. Just works.

If your LLM forgets what it’s allowed to call, that’s not alignment — that’s amnesia.

You don’t need to include the MCP client function definitions in every prompt. It’s enough to define them once, usually in the system prompt or during the tool/function registration phase at the beginning of the session; after that, the MCP client passes the tool schemas to the model with each request on your behalf, so the model stays aware of the available functions and can decide when and how to call them based on user input.
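
For a concrete picture, here is a minimal sketch assuming the official MCP Python SDK (the `mcp` package) and the OpenAI Chat Completions API; the server script name, model, and prompts are placeholders, not part of the original question. The tool schemas are fetched once at session start and handed to the model as the structured `tools` parameter, never pasted into the prompt text itself.

```python
# Minimal sketch: register MCP tools once, then let the client supply them to the model.
# Assumptions: the official MCP Python SDK ("mcp"), the OpenAI Python SDK,
# and a hypothetical local server script "weather_server.py".
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from openai import AsyncOpenAI


async def main() -> None:
    server_params = StdioServerParameters(command="python", args=["weather_server.py"])

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Registration phase: fetch the tool definitions ONCE via tools/list.
            listing = await session.list_tools()
            tools = [
                {
                    "type": "function",
                    "function": {
                        "name": t.name,
                        "description": t.description or "",
                        "parameters": t.inputSchema,
                    },
                }
                for t in listing.tools
            ]

            client = AsyncOpenAI()
            messages = [
                {"role": "system", "content": "You may call the registered tools when useful."},
                {"role": "user", "content": "What's the weather in Berlin?"},
            ]

            # The schemas travel in the structured `tools` parameter on each API call;
            # the prompt text never needs to repeat them.
            response = await client.chat.completions.create(
                model="gpt-4o", messages=messages, tools=tools
            )

            # If the model decides to call a tool, forward the call to the MCP server.
            for call in response.choices[0].message.tool_calls or []:
                result = await session.call_tool(
                    call.function.name, json.loads(call.function.arguments)
                )
                print(result.content)


asyncio.run(main())
```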

The exact behavior depends on the LLM or framework you’re using (OpenAI, Claude, open-source models, etc.), since the implementation details can vary slightly.

Contact me at the following ID to find out more: https://in.linkedin.com/company/intuz
