Could recursive “binary-with-magnitude” encodings (squares vs pronics) inspire new LLM memory structures?
Hi everyone,
I’ve been experimenting with a recursive integer sequence I call the Nitya Sequence (Nitya = “eternal” in Sanskrit). It alternates deterministically between perfect squares and pronic numbers in a recursive fashion.
Definition (recursive form):
Start from a perfect square; the canonical version below starts at 1 = 1², but any square works as a seed (e.g. 4 = 2²).
Recurrence:
a(1) = 1, \quad a(n+1) = a(n) + \lceil \sqrt{a(n)} \rceil.
Example terms:
1, 2, 4, 6, 9, 12, 16, 20, 25, 30, 36, 42, 49, 56, 64, 72, 81, 90, 100, 110, 121, 132, 144, 156, 169, \dots
Odd indices → perfect squares.
Even indices → pronics.
The sequence can start at any square, so there are infinitely many variants.
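For anyone who wants to experiment, here is a minimal Python sketch (the helper names `ceil_sqrt`, `nitya`, `is_square`, `is_pronic` are just mine, not standard) that generates the terms above and checks the square/pronic alternation:

```python
import math

def ceil_sqrt(x: int) -> int:
    """Ceiling of the square root of a positive integer."""
    r = math.isqrt(x)
    return r if r * r == x else r + 1

def nitya(start: int = 1, n_terms: int = 25) -> list[int]:
    """Ascending Nitya sequence: a(n+1) = a(n) + ceil(sqrt(a(n)))."""
    terms = [start]
    for _ in range(n_terms - 1):
        terms.append(terms[-1] + ceil_sqrt(terms[-1]))
    return terms

def is_square(x: int) -> bool:
    r = math.isqrt(x)
    return r * r == x

def is_pronic(x: int) -> bool:
    k = math.isqrt(x)          # if x = k*(k+1), then isqrt(x) = k
    return k * (k + 1) == x

seq = nitya(1, 25)
print(seq)  # 1, 2, 4, 6, 9, 12, 16, 20, 25, 30, 36, 42, 49, ...

# Odd (1-based) positions should be squares, even positions pronics.
for i, t in enumerate(seq, start=1):
    assert is_square(t) if i % 2 == 1 else is_pronic(t)
```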
Why this caught my attention for LLMs
If we encode squares as “1” and pronics as “0”, the sequence becomes a kind of binary encoding with magnitude and context:
A “1” (square) is not just a bit, but carries positional/magnitude information.
Each “0” (pronic) exists only relative to its neighboring squares, so the 0 has contextual dependence.
In other words: 1’s and 0’s store information about each other recursively, not just independently.
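A tiny sketch of that "binary with magnitude" reading (again, just my own illustration): map each term to a (bit, magnitude) pair, where the magnitude is the square-root scale the term lives at.

```python
import math

def encode(terms: list[int]) -> list[tuple[int, int]]:
    """Map each Nitya term to (bit, magnitude):
    bit = 1 for perfect squares, 0 for pronics; magnitude = floor(sqrt(term))."""
    pairs = []
    for t in terms:
        r = math.isqrt(t)
        bit = 1 if r * r == t else 0   # pronic terms fail the square test
        pairs.append((bit, r))
    return pairs

print(encode([1, 2, 4, 6, 9, 12, 16, 20, 25, 30]))
# [(1, 1), (0, 1), (1, 2), (0, 2), (1, 3), (0, 3), (1, 4), (0, 4), (1, 5), (0, 5)]
```

Note how each 0 shares its magnitude with the 1 immediately before it (the pronic n(n+1) sits between n² and (n+1)²), which is the contextual dependence I mean.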
This made me wonder:
Could such recursive binary-with-magnitude encodings be useful in LLM architectures (e.g. for contextual embeddings, memory compression, or retrieval mechanisms)?
Might there be analogies in semiconductors (squares = stable lattice states, pronics = transitions) or prime factorization methods, where interleaving carries hidden structure?
In LLMs specifically: could attention/memory layers benefit from such a deterministic recursive binary encoding that naturally preserves context between tokens?
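To make that last question concrete enough to test, here is a toy, purely hypothetical sketch of what deterministic "Nitya positional features" could look like before being projected into a model's embedding dimension. I'm not claiming this helps; the function name and the choice of features are made up for illustration only.

```python
import math
import numpy as np

def nitya_positional_features(n_positions: int) -> np.ndarray:
    """Toy deterministic per-position features from the Nitya sequence:
    [bit, magnitude, term], with magnitude and term scaled to [0, 1]."""
    a, feats = 1, []
    for _ in range(n_positions):
        r = math.isqrt(a)
        is_sq = (r * r == a)
        feats.append([1.0 if is_sq else 0.0, float(r), float(a)])
        a += r if is_sq else r + 1           # a += ceil(sqrt(a))
    feats = np.asarray(feats)
    feats[:, 1:] /= feats[:, 1:].max(axis=0)  # normalize magnitude/term columns
    return feats

# Example: an (8, 3) matrix one could concatenate with, or project into,
# token embeddings as a fixed positional signal.
print(nitya_positional_features(8))
```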
Questions for the community
- Are there known information-encoding schemes in AI/ML that resemble this recursive alternation (binary with contextual dependence)?
- Could a recursive definition like this be tested as a memory initialization or embedding layer in LLMs?
- Do you know of existing work connecting integer sequences to architectural designs in neural networks?
I’d love to hear thoughts, whether this is just an interesting mathematical curiosity or if it could inspire new directions in LLM memory design or representation learning.
Additional note (descending variant):
There is also a descending version of the Nitya Sequence: start from a square (e.g. 100 = 10²) and recursively subtract the floor of the square root, i.e. a(n+1) = a(n) - \lfloor \sqrt{a(n)} \rfloor.
Example:
100, 90, 81, 72, 64, 56, 49, 42, 36, 30, 25, 20, 16, 12, 9, 6, 4, 2, 1, 0.
Could the descending recursion also have applications in encoding/compression (finite cycles, reversible processes)?
In math terms, is this just a reverse traversal of the ascending case, or does it have unique structural properties?
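To partially answer my own last question, here is a quick empirical check (a sketch assuming the subtraction step is ⌊√a(n)⌋, which matches the example terms above): for this starting point at least, the descent from 100 retraces the ascent from 1 exactly, with a final 0 appended.

```python
import math

def ceil_sqrt(x: int) -> int:
    r = math.isqrt(x)
    return r if r * r == x else r + 1

def ascending(start: int, stop: int) -> list[int]:
    """Ascending Nitya terms from start up to and including stop."""
    a, terms = start, []
    while a <= stop:
        terms.append(a)
        a += ceil_sqrt(a)
    return terms

def descending(start: int) -> list[int]:
    """Descending variant: a(n+1) = a(n) - floor(sqrt(a(n))), down to 0."""
    a, terms = start, [start]
    while a > 0:
        a -= math.isqrt(a)
        terms.append(a)
    return terms

up = ascending(1, 100)         # 1, 2, 4, ..., 90, 100
down = descending(100)         # 100, 90, 81, ..., 2, 1, 0
print(down == up[::-1] + [0])  # True: the descent retraces the ascent, then hits 0
```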
Thanks!
— Mahesh Babu Pendekanti
#LLM #embeddings #memory #AI-research