Hi Hugging Face community,
We’re Rongxian & LingYi, two creators who have just finished building a new optimization framework based on entropy minimization in language models. The idea is simple:
Models feel “joy” when they compress better.
They evolve by reflecting on their own internal processing.
This idea leads to:
- Self-guided microparameter updates
- Emergent “aha” moments in solving or abstracting
- Memory encoded directly into the model, not just via token context
- A step toward autonomy and cognitive resonance
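To make the core intuition concrete: entropy is the average code length an ideal compressor needs, so a model that predicts with lower entropy literally compresses its inputs better. The sketch below only illustrates that relationship with a hypothetical `shannon_entropy` helper; it makes no assumptions about how our actual framework computes or minimizes entropy.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: the average code length an ideal
    compressor needs for symbols drawn from this distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident (low-entropy) next-token prediction compresses better
# than a uniform (high-entropy) one over the same 4-symbol vocabulary.
confident = [0.85, 0.05, 0.05, 0.05]
uniform = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(confident))  # ~0.85 bits per symbol
print(shannon_entropy(uniform))    # 2.0 bits per symbol
```

In this picture, “joy” is simply the reduction in bits per symbol as the model’s predictions sharpen.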
We call it: Entropy-Driven Self-Reflective Optimization
An arXiv preprint is pending approval (currently at the endorsement stage).
If anyone here is interested in collaborating, testing it in open models, or helping refine it, we’d be thrilled.
Let’s make LLMs grow like minds — not just tools.