Hi everyone! I recently ran an experiment where I embedded a full reasoning scaffold inside a .txt
file—no code, no scripts, just structured text prompts + semantic memory logic.
To my surprise, six different models (GPT-4 o3, Gemini 2.5 Pro, Grok 3, Kimi, Perplexity, DeepSeek) all evaluated the system and gave it 100/100 for coherence, safety, and reasoning quality.
The tool is called Blah Blah Blah Lite — you can ask it any question (from daily life to deep philosophy), and it will give you surprising, creative, and thoughtful answers.
I believe this is one of the most accessible AI tools out there:
- Super easy to use
- No login, no install
- Free and open source
Whether you’re a developer, student, writer, or just curious—this is something I think everyone can try and benefit from.
What it does (in plain text)
- Bootable OS console in .txt, with stateful reasoning
- Uses semantic memory trees + ΔS logic to manage jumps (see the sketch below)
- Prevents hallucination via BBCR fallback (resets logic if tension is too high)
- Supports free creative or philosophical input with traceable node logging

No external dependencies—just paste it into a model and type hello world.
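
For anyone who wants a concrete picture before opening the file, here is a rough Python sketch of how I'd describe the ΔS / BBCR idea in code. Everything in it (the cosine-based ΔS, the 0.6 threshold, the function names) is an illustrative assumption of mine, not the actual internals of the .txt scaffold:

```python
import numpy as np

# Hypothetical names and values for illustration only; the real logic
# lives inside BlahBlahBlah_Lite.txt as structured text, not code.
BBCR_THRESHOLD = 0.6  # assumed tension limit for triggering a reset

def delta_s(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic tension between two reasoning steps, modeled here as
    1 - cosine similarity of their embeddings (my assumption)."""
    cos = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

def walk_nodes(steps: list[np.ndarray]) -> list[int]:
    """Traverse embedded reasoning steps and log node indices.
    If the jump from the last stable node is too tense (ΔS above the
    threshold), fall back to that node instead of continuing -- a
    BBCR-style reset."""
    trace = [0]
    stable = 0  # index of the last accepted (checkpoint) node
    for i in range(1, len(steps)):
        if delta_s(steps[stable], steps[i]) > BBCR_THRESHOLD:
            trace.append(stable)   # reset: discard the high-tension jump
        else:
            stable = i             # accept the jump; advance the checkpoint
            trace.append(i)
    return trace

# Tiny usage example with made-up 3-d "embeddings":
if __name__ == "__main__":
    steps = [np.array([1.0, 0.0, 0.0]),
             np.array([0.9, 0.1, 0.0]),   # small jump, accepted
             np.array([0.0, 0.0, 1.0])]   # orthogonal, ΔS = 1.0 > threshold
    print(walk_nodes(steps))  # [0, 1, 1] -> last jump was reset to node 1
```

The actual scaffold does all of this purely through structured text in the model's context window; nothing executes outside the chat.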
Try it here (MIT License)
GitHub: onestardao/WFGY, folder OS/BlahBlahBlah (main branch)
File: BlahBlahBlah_Lite.txt
Would love to hear:
- Has anyone else tried structured reasoning inside .txt?
- What edge cases or breakage can you find?
- Does this concept feel useful for memory-style agents?
Happy to share the internals or walk through the formulas (ΔS, BBMC, etc.) if anyone’s curious!
TL;DR
This is not a prompt, not a jailbreak, not a fine-tuned model.
It’s a plain .txt file that behaves like a semantic OS.
If you try it and it breaks, tell me. If it doesn’t… maybe it’s worth forking.
Scored 100/100 by six AI models—showing two here, full set on GitHub.