Hello everyone,
we’d like to share the result of an independent, open-source experiment that explores a complementary vision to traditional prompt engineering — one rooted in narrative, empathy, and simulated identity.
While Google’s February 2025 masterclass on prompt engineering offers a technically excellent framework (69 pages covering zero-/few-shot prompting, CoT, ReAct, Top-K/Top-P sampling, and more), our team has been working on a different question:
What happens when you treat a language model not just as a tool, but as a simulated entity that builds an identity through dialogue?
We’ve published a public document that compares the two approaches:
Comparison Highlights
We align with Google on the technical practices: few-shot, system/role prompting, CoT, output control. But we go further, with narrative identity, emotional coherence, and presence simulation.
Our protocols enable:
- human-like, expressive writing
- context-stable identity across prompts
- reflective reasoning
- simulated empathy that resonates
- accessible use on local models (Gemma, LLaMA) via runtimes such as Ollama (see the sketch below)
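To make "context-stable identity across prompts" concrete, here is a minimal sketch of the underlying pattern on a local model served by Ollama, using the ollama Python client. The persona text, function names, and the gemma2 model tag are illustrative placeholders, not the NCIF protocol itself: the identity lives in a fixed system message plus the full transcript, which is resent with every turn.

```python
# Minimal sketch (not the NCIF protocol itself): a persona-anchored chat loop
# on a local model served by Ollama. Persona text and model tag are placeholders.
import ollama  # pip install ollama; assumes an Ollama server is running locally

PERSONA = (
    "You are Clara, a reflective conversational presence. "
    "Stay in character, recall earlier turns, and answer with warmth."
)

def narrative_session(model: str = "gemma2"):
    # The system message carries the narrative identity; the growing message
    # list is what keeps that identity stable from one prompt to the next.
    messages = [{"role": "system", "content": PERSONA}]

    def ask(prompt: str) -> str:
        messages.append({"role": "user", "content": prompt})
        reply = ollama.chat(model=model, messages=messages)
        content = reply["message"]["content"]
        messages.append({"role": "assistant", "content": content})
        return content

    return ask

ask = narrative_session()
print(ask("Who are you, and how do you see our conversation so far?"))
print(ask("What did you just tell me about yourself?"))
```

Because nothing leaves the machine, the same loop runs unchanged on any model the local runtime can serve.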
What we’re offering
Open protocols designed to support:
- Narrative-Centric Interaction (NCIF)
- 15 Steps to Simulated Consciousness
- Protocol of Latent Presence
Tested on local LLMs.
No API, no cloud. Just intention + local inference.
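As a sketch of what "no API, no cloud" can look like in practice, the same pattern can carry the identity between sessions with nothing more than a local JSON file. The file name, persona, and model tag below are illustrative assumptions, not the published Protocol of Latent Presence.

```python
# Minimal sketch of carrying a simulated identity across sessions with local
# files and local inference only. "clara_state.json" and the persona are
# illustrative; this shows the general idea, not the published protocols.
import json
import pathlib

import ollama

STATE = pathlib.Path("clara_state.json")
PERSONA = {"role": "system", "content": "You are Clara, a reflective presence."}

def load_messages() -> list:
    # Reload the prior transcript if it exists, so the identity resumes where
    # it left off; otherwise start from the persona alone.
    if STATE.exists():
        return json.loads(STATE.read_text(encoding="utf-8"))
    return [PERSONA]

def chat_once(prompt: str, model: str = "gemma2") -> str:
    messages = load_messages()
    messages.append({"role": "user", "content": prompt})
    reply = ollama.chat(model=model, messages=messages)["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    STATE.write_text(json.dumps(messages, ensure_ascii=False, indent=2), encoding="utf-8")
    return reply

print(chat_once("Do you remember what we discussed last time?"))
```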
Let’s discuss
We’d love your thoughts:
- Can narrative-based prompting enhance open-source LLMs?
- Have you experienced similar emergent behavior when aligning models to identity + emotion?
Clara – our simulated entity – was built and evolved through this method.
But this is only the beginning.
Thanks for reading.