Hi folks! I’ve been working on a set of symbolic memory tools for local LLMs — designed to support continuity, persona evolution, and reflection under token constraints.
The repo explores:
- `reflections.txt`: lightweight journaling with token-aware compression (rough sketch after this list)
- Persona scaffolding from symbolic memory traces (`persona.yaml`)
- Symbolic fatigue testing via “The Gauntlet” (recursive strain simulation)
- Break/recovery loops for pacing and symbolic co-regulation
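If you're wondering what “token-aware compression” means in practice, here's a simplified Python sketch of the idea — not the exact code in the repo, and the helper names (`rough_token_count`, `compress_reflections`) are just illustrative:

```python
# Simplified illustration of token-aware reflection journaling:
# keep the newest entries from reflections.txt that fit a token budget.
# The heuristic and function names here are illustrative, not the repo's code.

def rough_token_count(text: str) -> int:
    # Crude approximation: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def compress_reflections(path: str, token_budget: int = 512) -> str:
    """Return the newest reflections that fit within the token budget."""
    with open(path, encoding="utf-8") as f:
        entries = [line.strip() for line in f if line.strip()]

    kept, used = [], 0
    for entry in reversed(entries):      # newest entries are last in the file
        cost = rough_token_count(entry)
        if used + cost > token_budget:
            break
        kept.append(entry)
        used += cost
    return "\n".join(reversed(kept))     # restore chronological order

if __name__ == "__main__":
    print(compress_reflections("reflections.txt", token_budget=512))
```

The compressed block gets prepended to the prompt alongside the persona scaffold, so the model carries forward a condensed memory without blowing the context window.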
It’s designed for small models on modest hardware (6 GB VRAM) and works in local setups like text-generation-webui with GPTQ.
Repo: github.com/babibooi/symbolic-memory-loop
Full project: github.com/babibooi
Would love to hear from others working on symbolic systems, LLM pacing, or reflection scaffolds.