Hi everyone,
I’m an independent researcher, and I’ve been working on something a bit unconventional — a cognitive architecture that uses equations instead of vectors as its core representation.
Abstract
We present TENET (Temporal Equation Network for Embodied Theory), a design philosophy and proof-of-concept for a cognitive architecture that replaces the vector with the equation as the fundamental representational primitive. Starting from the observation that human language exhibits both systematicity and open-ended abstraction—and that current deep learning architectures fail to account for this duality—we develop a philosophical framework grounded in human perception, the induction-deduction duality, and the epistemology of scientific discovery. From this framework, we derive 16 design principles that collectively specify an architecture in which: spatial equations define the boundaries of world structure; time mediates both physical interaction and epistemological verification; knowledge is acquired through mapping boundary expansion guided by residual geometry; perception operates as top-down prediction with residual-driven correction; and a four-layer rule hierarchy (from cognitive meta-constraints to candidate hypotheses) organizes all knowledge with temporal crystallization dynamics. A proof-of-concept implementation, running entirely on CPU without neural networks or gradient descent, validates the core methodology. We introduce Cross-Modal Symbolic Regression (CMSR)—a new problem class in which the system discovers bridge equations across sensory modalities—and demonstrate perfect recovery (R² = 1.0) across five Feynman-class physics domains. A pendulum dynamics experiment further validates the system’s ability to autonomously deepen its equation set when confronted with nonlinear phenomena, producing interpretable Why-signals that correspond to mapping boundary depth transitions.
The basic argument
Every major DL architecture assumes information = high-dimensional vectors. But a 768-dim embedding of “table” is at the same abstraction level as the word itself — both are descriptions. An equation like n·x = d is structurally different: it doesn’t describe one instance, it defines the boundary of all possible instances. This distinction matters for deduction, grounding, and interpretability.
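To make that concrete, here's a tiny illustration (my toy example, not code from the paper): a plane equation stores no particular point; it answers a membership question about every point in the space.

```python
import numpy as np

# Toy illustration (mine, not the paper's code): the plane equation n·x = d
# doesn't describe one instance. It defines the boundary containing ALL
# points that satisfy it, and membership is decided by evaluation.
n = np.array([0.0, 0.0, 1.0])   # plane normal (say, a horizontal table top)
d = 0.75                        # offset: the surface sits at height 0.75

def on_boundary(x, tol=1e-9):
    """True iff point x satisfies the defining equation n·x = d."""
    return abs(np.dot(n, x) - d) < tol

print(on_boundary(np.array([0.2, 1.3, 0.75])))   # True: any point at that height
print(on_boundary(np.array([5.0, -2.0, 0.40])))  # False: off the boundary
```

An embedding of "table" can only be compared for similarity to other embeddings; the equation supports deduction, because it makes a verifiable claim about points it has never seen.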
A key idea in the paper: the world’s structure — the manifold — exists independently of any observer. Learning isn’t about constructing or unfolding the manifold. It’s about expanding the system’s mapping boundary into it. Each new equation extends the system’s reach, making a previously inaccessible region of the manifold operable. Residuals (the mismatch between prediction and observation) act as pressure signals on this boundary, pointing the direction of expansion.
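Here's the residual idea in a few lines (again my toy, borrowing the pendulum setting from the experiments below): a linear pendulum equation is scored against observations generated by the true sin(θ) law, and the residual grows exactly where the current equation set runs out.

```python
import numpy as np

# Toy residual-as-pressure demo (my sketch, not the paper's code): compare the
# linear model theta'' = -(g/L)*theta with "observations" generated by the
# true sin(theta) law. The residual grows with amplitude, pointing at the
# region of the manifold the current equation set cannot yet reach.
g, L = 9.81, 1.0
theta = np.linspace(0.05, 1.5, 7)             # small to large amplitudes (rad)
observed  = -(g / L) * np.sin(theta)          # what the world actually does
predicted = -(g / L) * theta                  # what the current equation says

for th, r in zip(theta, observed - predicted):
    print(f"theta = {th:4.2f} rad   residual = {r:+7.4f}")
# ~0 at small angles (boundary already covers that region), large at big
# angles: the direction of required expansion.
```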
What the proof-of-concept actually does (CPU-only, no neural nets)
- Cross-Modal Symbolic Regression (CMSR): a new problem class where the system discovers bridge equations across sensory modalities, not within a single variable set. R² = 1.0 across five Feynman-class physics domains (Kepler's law, ideal gas, Ohm's law, wave equation, relativistic energy), with the same method and no domain tuning. (First sketch below.)
- Autonomous equation deepening: the system starts with a linear pendulum model, is confronted with large-amplitude data, discovers it needs sin(θ), and generates an interpretable "Why-signal" for the transition.
- Residual-driven correction: instead of replacing old knowledge, the system discovers a correction term that combines with what it already knows to produce the right answer. No catastrophic forgetting, by construction. (Second sketch below.)
- Gradient descent in equation structure space: converges from uniform initialization to sin(θ) at 95.3% confidence, extracts a symbolic equation, then discards the gradient machinery. (Third sketch below.)
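To make the first bullet concrete, here's roughly what the Kepler case of CMSR looks like. This is my own stripped-down toy (a grid search over power laws); the actual system's search procedure, variable names, and exponent grid are not from the paper:

```python
import numpy as np

# Toy cross-modal bridge discovery: one "modality" observes orbit sizes a,
# another observes periods T. The bridge equation T = c * a^p is recovered
# by scanning exponents and fitting c by least squares in log-space.
rng = np.random.default_rng(0)
a = rng.uniform(0.4, 30.0, 50)                  # spatial modality: semi-major axes (AU)
T = a ** 1.5                                    # temporal modality: periods (yr)

best = None
for p in np.arange(0.5, 3.01, 0.25):            # candidate exponents
    log_c = np.mean(np.log(T) - p * np.log(a))  # least-squares fit of log c
    pred = np.exp(log_c) * a ** p
    r2 = 1.0 - np.sum((T - pred) ** 2) / np.sum((T - T.mean()) ** 2)
    if best is None or r2 > best[0]:
        best = (r2, p, np.exp(log_c))

r2, p, c = best
print(f"bridge equation: T = {c:.3f} * a^{p:.2f}   (R^2 = {r2:.6f})")
# -> T = 1.000 * a^1.50: the exponent 3/2 is Kepler's third law.
```

The point of the cross-modal framing is that a and T come from different observation channels; the discovered equation is the bridge between them.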
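For the residual-driven correction (third bullet), a toy pendulum version looks like this; note the θ³ candidate term is my hypothetical basis choice, whereas the paper's system discovers the term itself:

```python
import numpy as np

# Toy residual-driven correction: the linear pendulum law is KEPT, and a
# correction term is fit to its residuals instead of replacing it.
g, L = 9.81, 1.0
theta = np.linspace(0.05, 1.5, 200)
observed = -(g / L) * np.sin(theta)           # ground-truth accelerations
base = -(g / L) * theta                       # old knowledge, left untouched

residual = observed - base
candidate = theta ** 3                        # hypothetical correction term
c = np.dot(candidate, residual) / np.dot(candidate, candidate)  # 1-D least squares

corrected = base + c * candidate
print(f"fitted coefficient: {c:+.4f}  (Taylor suggests ~ +g/(6L) = {g/(6*L):+.4f})")
print(f"max |error| before: {np.abs(residual).max():.4f}, "
      f"after: {np.abs(observed - corrected).max():.4f}")
```

The old equation is never overwritten; the correction composes with it, which is where the "no catastrophic forgetting by construction" claim comes from.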
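And for the last bullet, here's the structure-space gradient descent as I understand it from the description: softmax weights over a small candidate library trained by ordinary gradient descent, after which the winner is extracted symbolically and the gradient machinery is discarded. The library, normalization, and learning rate are my guesses, and the run won't reproduce the paper's 95.3% figure:

```python
import numpy as np

# Toy gradient descent in equation STRUCTURE space: softmax weights over a
# library of candidate structures are trained, then the winning structure is
# read off symbolically. Columns are unit-normalized so only SHAPE competes.
theta = np.linspace(-3.0, 3.0, 400)
target = np.sin(theta)                                  # true restoring-force shape
target = target / np.linalg.norm(target)

names = ["theta", "theta^2", "theta^3", "sin(theta)", "cos(theta)"]
B = np.stack([theta, theta**2, theta**3, np.sin(theta), np.cos(theta)], axis=1)
B = B / np.linalg.norm(B, axis=0)                       # unit-normalize each column

w = np.zeros(len(names))                                # uniform initialization
for _ in range(3000):
    s = np.exp(w - w.max()); s /= s.sum()               # softmax over structures
    grad_s = 2.0 * B.T @ (B @ s - target)               # dLoss/ds for squared error
    w -= 0.2 * (np.diag(s) - np.outer(s, s)) @ grad_s   # chain rule through softmax

s = np.exp(w - w.max()); s /= s.sum()
print({n: round(float(p), 3) for n, p in zip(names, s)})
print("extracted structure:", names[int(np.argmax(s))])  # gradients now discarded
```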
The broader framework
The paper also develops a philosophical framework covering predictive perception, a four-layer knowledge hierarchy with temporal crystallization, and an account of language as lossy projection from equation space. 16 design principles total. It’s a design philosophy + proof-of-concept, not a claim to have solved anything big — the honest assessment section lists everything not yet demonstrated.
Full paper (Zenodo): TENET: Temporal Equation Network for Embodied Theory
Looking for arXiv endorsement
Full transparency: I’m currently looking for an arXiv endorsement in cs.AI (or cs.LG). As an independent researcher without university affiliation, I don’t have a natural path to get one. If you’ve read through the paper (or even skimmed the experiments) and think it meets the bar, here’s the endorsement link: https://arxiv.org/auth/endorse?x=LQGN4V
Of course, I’m also genuinely interested in feedback on the work itself — the philosophy, the experiments, the gaps. All criticism welcome.
Thanks for reading!