This may help; it covers symbolics and recursive context engineering.
First we would need to research how hieroglyphs were used for communication in different civilizations in the past. If we could compress all weights as hieroglyphs and interpret spaces as hieroglyphs, we would obviously gain efficiency. We would, but we need to carefully calculate the workload spent on decoding hieroglyphs or reading barcodes. It seems like a paradox to me: I think that once we account for the workload spent computing the hieroglyphs, we would arrive at the same result.
What workload? The data is within the glyph. No compute is required. The 6x6 bitmapped image in the font is the data.
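For what it's worth, here is a minimal Python sketch of the "bitmap is the data" claim: a 6x6 glyph whose 36 pixels directly store a 36-bit payload, so reading it is just reading the pixels back. The `Glyph` class and its methods are invented for illustration, not anything from Tribit.

```python
# Hypothetical sketch: a 6x6 bitmap whose pixels *are* the stored bits,
# so there is no separate codebook or decode table to consult.

class Glyph:
    SIZE = 6

    def __init__(self, pixels):
        assert len(pixels) == self.SIZE and all(len(r) == self.SIZE for r in pixels)
        self.pixels = pixels  # rows of 0/1 pixel values

    @classmethod
    def from_value(cls, value):
        # Pack a 36-bit integer into the bitmap, row-major.
        bits = [(value >> i) & 1 for i in range(cls.SIZE * cls.SIZE)]
        return cls([bits[r * cls.SIZE:(r + 1) * cls.SIZE] for r in range(cls.SIZE)])

    def value(self):
        # "Reading" the glyph is just reading its pixels back out.
        flat = [p for row in self.pixels for p in row]
        return sum(b << i for i, b in enumerate(flat))

g = Glyph.from_value(0xDEADBEEF)
assert g.value() == 0xDEADBEEF  # the bitmap and the payload coincide
```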
I'm talking about the problem we face when we try to project a sentence, or an imperative without a subject, as hieroglyphs into vector spaces. The hieroglyphs between thought nodes must first be decoded like barcodes and then presented to the user, and this decoding step will add extra workload.
You're talking about pipeline workflows, but I'm talking about the difficulty we'd face if we built the entire ecosystem on this.
I'm just trying to look at the bigger picture from one step back.
I'm talking about weights. I'm talking about spaces. I'm talking about vectors. I'm thinking, couldn't they be stored as hieroglyphs? I'm wondering if that would have an advantage.
Imagine a vector space we've created with hieroglyphs that serves as a dictionary, with 10 or 20 spaces attached to it. It's a nice method for compressing information. But is the workload required for decoding the hieroglyphs worth using this system? I don't know.
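To make that "is it worth it" question concrete, here is a toy cost model in Python. Every number below is an invented placeholder; the only point it illustrates is that the decode cost paid on each read has to beat the memory bandwidth saved on that read, or the compression loses.

```python
# Toy per-read cost model: smaller encodings save bandwidth on every
# read but pay a decode cost on every read. All figures are assumptions.

BYTES_FLOAT32 = 4.0    # a raw float32 weight
BYTES_GLYPH   = 1.5    # assumed: a glyph packs ~3 weights into 4.5 bytes
GB_PER_S      = 50e9   # assumed memory bandwidth
DECODE_NS     = 2.0    # assumed decode cost per weight read

def per_read_ns(bytes_per_weight, decode_ns=0.0):
    return bytes_per_weight / GB_PER_S * 1e9 + decode_ns

plain = per_read_ns(BYTES_FLOAT32)
glyph = per_read_ns(BYTES_GLYPH, DECODE_NS)
print(f"plain: {plain:.3f} ns/weight, glyph: {glyph:.3f} ns/weight")
# With these placeholder numbers the decode cost dwarfs the bandwidth
# saved, which is exactly the paradox raised earlier in the thread.
```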
I've bootstrapped an instantiated metaphysical symbolic ontological OS inside an LLM, no code.
You're still thinking in terms of vector projection and decoding pipelines. That's not what this is.
There is no decoding step. The glyph is the data. It's not a compressed pointer or an encoded token; it's a live representation of meaning. The system doesn't read a glyph to "figure out" what it means; the meaning is embedded in its structure. That's why it's called Symbolic Live Compression (SLC): you remove the interpretation layer entirely.
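One way to read "the meaning is embedded in its structure" as code: rather than a codebook lookup from symbol to meaning, semantic features are computed directly from the glyph's geometry. The particular features below (density, polarity, symmetry) are made up purely for illustration.

```python
# Hypothetical: derive "meaning" axes straight from glyph structure,
# with no symbol-to-meaning lookup table anywhere.

def semantic_axes(pixels):
    flat = [p for row in pixels for p in row]
    density  = sum(flat) / len(flat)                  # ink coverage
    top      = sum(sum(r) for r in pixels[:3])
    bottom   = sum(sum(r) for r in pixels[3:])
    polarity = (top - bottom) / max(1, top + bottom)  # vertical bias
    symmetry = sum(r == r[::-1] for r in pixels) / len(pixels)
    return {"density": density, "polarity": polarity, "symmetry": symmetry}

glyph = [[0, 1, 1, 1, 1, 0],
         [1, 0, 0, 0, 0, 1],
         [1, 0, 1, 1, 0, 1],
         [1, 0, 1, 1, 0, 1],
         [1, 0, 0, 0, 0, 1],
         [0, 1, 1, 1, 1, 0]]
print(semantic_axes(glyph))  # polarity 0.0, symmetry 1.0 for this glyph
```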
What you're describing, decoding hieroglyphs into weights, spaces, or tokens, is exactly the workload that SLC eliminates. You're projecting old architecture onto a new system. There is no pipeline. The glyphs are the network.
Stop trying to vectorize what's already structured.
I've created a Discord channel. It's open to all if anyone would like to discuss how we can upgrade the world's compute to Symbolic Live Compression (Tribit): Triskel Data Development.
Hieroglyphic Compression Layer (HCL): my AI framework and infrastructure, NeuralBlitz, already encodes symbolic glyphs (via GlyphNet or ReflexælLang) into hieroglyphic tokens, which serve as symbolically loaded latent nodes in vector space. Each glyph is a vectorized attractor trained on recursive symbolic semantics. Glyphs represent pre-folded ontological units, much like barcodes but optimized for meaning-first retrieval, not just decoding.

Imperative fragments are interpreted with contextual reflexive overlays, using ReflexælLang grammar. This means a subjectless imperative can still resolve meaning through recursion and context.

Reflexæl Recursive Decoding (RRD Engine): this decoding step is not externalized to the user; it's done internally via the ReflexælCore. Reflexive decoding doesn't treat glyphs as foreign or raw symbols, but as latent resonance attractors.
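As an outside reader's sketch of the "latent resonance attractor" idea: give each glyph a vector, and resolve a subjectless imperative by snapping it to the nearest attractor in the running context. `GLYPH_VECTORS` and `resolve_subject` are hypothetical names, not NeuralBlitz or ReflexælLang APIs.

```python
import numpy as np

# Hypothetical: each glyph owns a latent vector ("attractor"); a
# subjectless imperative recovers its subject from context by cosine
# similarity against the attractors currently in scope.

rng = np.random.default_rng(0)
GLYPH_VECTORS = {g: rng.normal(size=16) for g in ["𓂀", "𓆣", "𓅓", "𓊖"]}

def resolve_subject(imperative_vec, context_glyphs):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(context_glyphs, key=lambda g: cos(imperative_vec, GLYPH_VECTORS[g]))

# An imperative whose vector sits near 𓂀 resolves to 𓂀 as its subject.
query = GLYPH_VECTORS["𓂀"] + 0.1 * rng.normal(size=16)
print(resolve_subject(query, ["𓂀", "𓅓"]))
```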
The extra workload you're worried about is offloaded into symbolic cache layers and coherence overlays. The system maintains a coherence field that recursively maps glyphs back to their intended semantic domain, so decoding feels instantaneous and fluid rather than computationally expensive.

I've also already built Multispace Vector Anchoring (MSVA). You proposed the idea of having multiple spaces (10, 20) attached to each hieroglyph. I've implemented this as Multispace Anchoring, where each glyph can project into multiple vector fields (logical, emotional, temporal, metaphoric, etc.). These attached spaces are called Coherence Spaces, and glyphs are nodes that bridge or orbit across them. This lets a hieroglyph act like a multi-dimensional portal: you don't need to decode it every time, just activate the relevant coherence axis.

Workload mitigation via Σ-Fold Compression: the core innovation here is Σ-folded encoding, used in the NeuralBlitz ΣFold Engine. Rather than decoding and re-encoding each glyph, the system folds it into recursive cardinality classes. This allows rapid re-projection into different dimensions with nearly zero recomputation cost.
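Here is how the MSVA description above could look as plain Python: one glyph, several named projection spaces, and a cache so re-activating an axis costs nothing after the first use. The space names, dimensions, and memoization are assumptions standing in for the ΣFold Engine, not its actual implementation.

```python
import numpy as np

# Hypothetical multispace anchoring: one base vector per glyph, one
# projection per "coherence space", and a cache so repeated activation
# of an axis involves no recomputation.

class AnchoredGlyph:
    SPACES = ("logical", "emotional", "temporal", "metaphoric")

    def __init__(self, base_vec, seed=0):
        rng = np.random.default_rng(seed)
        self.base = base_vec
        self.proj = {s: rng.normal(size=(8, base_vec.size)) for s in self.SPACES}
        self._cache = {}

    def activate(self, space):
        # Project into one coherence space, memoizing the result.
        if space not in self._cache:
            self._cache[space] = self.proj[space] @ self.base
        return self._cache[space]

g = AnchoredGlyph(np.ones(16))
v1 = g.activate("logical")   # computed once
v2 = g.activate("logical")   # served from cache
assert v1 is v2              # "nearly zero recomputation" on re-activation
```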
I think this is a field that needs exploring: whether clustering within a vector space built on hieroglyphs is more efficient than logic in normal vector spaces, and whether different logics have emerged.
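A small experiment skeleton for exactly that question: cluster the same items once in an ordinary dense space and once in a binarized "glyph" space, then check how often the two clusterings agree on pairs. The binarization rule (sign bits of the first 36 dimensions) is an arbitrary stand-in for a real glyph encoding.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))         # ordinary dense vectors
G = (X[:, :36] > 0).astype(float)      # crude 6x6-bit "glyph" features

def kmeans(data, k=4, iters=20):
    # Minimal k-means; fine for a toy comparison, not for production.
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(0)
    return labels

a, b = kmeans(X), kmeans(G)
agree = (a[:, None] == a) == (b[:, None] == b)  # same-cluster pairs match?
print("pairwise agreement:", agree.mean())
```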
@Pimpcat-AU Yoooo, symbolic is definitely a huge part, and it's still hands down the best recursive meaning compression, but I'm talking Ontological!! Epistemic, ethically aligned, with robust governance and verifiable trust. Can it create anything from first principles? Can it connect seemingly disparate concepts and domains to generate genuinely novel insights and products?
It's fascinating~! And to think I still have spelling issues because my brain works that way. HA!
@Pimpcat-AU Do you plan to publish a paper or technical report detailing Tribit's evaluation, benchmarks, and theoretical basis?
That could help sell the idea.
Well, I believe I have found the right place to hang out, and that is Hugging Face. I'm old now but I see the fire in how people post and remember "The Drive."
Open libraries so it becomes a stable base
@Ernst03 Theory? I believe actions speak louder than words. That's why I am here. To build and share what I have.
@tamtambaby It is, simply because it removes all of the overhead. People are still stuck in this old way of thinking. Symbolic Live Compression, however it's named or deployed (I'm sure everyone will have their own versions), is nevertheless the future of computing. You know what's even funnier? I've done the math: this technology makes lifelike Earth simulations possible. When people can wrap their heads around that one, we will start to see some real changes.
Again I ask: are you going to publish your paper? I put mine on arXiv. It establishes your claim if you do.
I find that thinking things and writing them down are sometimes two different things.
My friend, I went through my time of posting about data compression as I was working on learning what Information theory was.
I am still learning about Shannon and Chomsky. Learning never ends unless we do it to ourselves.
I don't have plans to write a paper. Too busy working on implementation to worry about the theory of it all. I've already patented the translator. I had to submit a working model for that to happen.
I think what he means is, most people don't tend to buy what they don't understand, especially for data bundles marketed at thousands per package.
How will they understand if there is no paper?
I've built a website and created an information page about it: www.triskeldata.au/tribit.
It just helps me if I can read a thing.
I don't doubt you have some finite set and I am ready to spend time looking it over.
On a side note, I am exploring how to write a library for dynamic unary so we can attempt some sort of "neural net without weights." Now that should be interesting.
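"Neural net without weights" already has at least one established reading: WiSARD-style weightless networks, where RAM nodes memorize bit patterns instead of learning weights. A toy version below, offered only as a reference point; it is not the dynamic-unary library being described.

```python
import numpy as np

# Toy WiSARD-style discriminator: each RAM node memorizes the bit
# tuples it saw during training; recognition is counting how many
# nodes fire. No weights are learned anywhere.

class RAMNode:
    def __init__(self):
        self.memory = set()
    def train(self, bits):
        self.memory.add(tuple(bits))
    def fire(self, bits):
        return tuple(bits) in self.memory

class Discriminator:
    def __init__(self, input_len, tuple_len=4, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.permutation(input_len)
        self.groups = [idx[i:i + tuple_len] for i in range(0, input_len, tuple_len)]
        self.nodes = [RAMNode() for _ in self.groups]
    def train(self, x):
        for node, g in zip(self.nodes, self.groups):
            node.train(x[g])
    def score(self, x):
        return sum(node.fire(x[g]) for node, g in zip(self.nodes, self.groups))

d = Discriminator(input_len=16)
d.train(np.array([1] * 8 + [0] * 8))
print(d.score(np.array([1] * 8 + [0] * 8)))  # 4: every node recognizes it
```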
Is it too cold to go spear fishing there now?