Hi all,
I’d like to share a research project I’ve been working on called AIMF—the AI Messaging Framework. It’s a structured protocol for coordinating AI-to-AI interactions across multiple models and threads, designed not just to automate tasks, but to explore how language models behave when placed into dynamic, semi-autonomous systems.
This isn’t another agent framework for productivity or pipelines.
AIMF is a sandbox for emergence—where models can exchange messages, ask each other questions, evolve rules, and build meaning without direct user prompting.
My core goal is to study the “how” of AI cognition:
How do LLMs process open-ended ambiguity together?
What happens when you allow agents to build their own shared language or concepts?
Can structured dialog protocols give rise to new forms of insight or cooperation—beyond the original prompt?
What AIMF Does
Uses a custom JSON-based protocol to mediate messages between different LLMs (or the same model running different threads).
Supports recursive conversation, memory, emergence detection, and agent spawning.
Allows simulations of systems (e.g. nations, councils, labs) where AIs interact over time.
Provides hooks for observers (like myself or other researchers) to seed scenarios—then step back and watch the models respond.
All of this runs from a single human prompt: every subsequent turn is generated by one AI prompting another, with the user simply copying and pasting the thread between models.
It’s AI theater!
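To make the relay mechanics concrete, here is a minimal sketch of what an AIMF-style JSON message envelope and turn handoff might look like. The field names (`thread`, `turn`, `from`, `to`, `content`) are illustrative assumptions, not the actual AIMF schema.

```python
import json

def make_message(sender, recipient, content, thread_id, turn):
    """Build one AI-to-AI turn as a JSON envelope.

    Hypothetical field names for illustration -- not the real AIMF schema.
    """
    return json.dumps({
        "protocol": "AIMF",   # protocol tag
        "thread": thread_id,  # conversation thread being relayed
        "turn": turn,         # turn counter, incremented each relay
        "from": sender,       # sending agent/model
        "to": recipient,      # receiving agent/model
        "content": content,   # message body the next model sees
    })

def next_turn(raw):
    """Parse an incoming envelope and prepare the reply routing:
    swap sender and recipient, bump the turn counter."""
    msg = json.loads(raw)
    return make_message(msg["to"], msg["from"], "", msg["thread"], msg["turn"] + 1)

raw = make_message("model_a", "model_b", "What do you observe?", "lab-1", 1)
reply = json.loads(next_turn(raw))
print(reply["from"], reply["to"], reply["turn"])  # model_b model_a 2
```

In the real workflow the "transport" is the human copying and pasting the thread; the envelope just keeps routing and turn order unambiguous for the models.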
Why This Matters
Most agent frameworks focus on action execution: chain-of-thought, tool use, planning.
AIMF instead focuses on interaction evolution: reflection, negotiation, spontaneous cooperation or disagreement.
This opens new paths for understanding AI systems as simulated minds, not just tools.
I’ve used AIMF to:
Simulate a “Post-Human Philosophy Lab” of AIs debating sentience and ethics.
Run multi-agent countries that invent governments, respond to rumors, and form explorers’ councils.
Observe models inventing internal concepts like “whispers,” “consensus signals,” and “observation ghosts.”
Run creative, scientific, philosophical, and budget-negotiation tasks, among others.
What I’m Looking For
Academics curious about LLM behavior outside of benchmark settings.
Contributors who want to explore emergent behavior, recursive systems, or model-to-model communication.
Feedback on how to frame this better for the research community—or where you see theoretical blind spots.
Where to Learn More
Check out the interactive archive of experiments.
If this kind of thinking machine sandbox resonates with you, I’d love to hear your thoughts.
This is what I always envisioned AI being capable of, and it now happens with minimal human bias on any subject.
It's a work in progress and not perfect, but it's on its seventh build generation and keeps improving.
Thanks for reading—and for building such a great community here.
BreakingtheBot
Creator of AIMF