Dynamic Unary: A dynamic data type

Hello, I am Ernst.

I believe that if someone wants to learn AI, they should be where people are learning AI.

While I don’t yet have a full AI implementation, I can contribute something foundational: a dynamic data type grounded in cycles and iteration. Since many of us are interested in AI that feels more “alive,” perhaps dynamical mathematics plays a role in that pursuit.

A bit of backstory:

I was first drawn to iteration and looping the moment I opened the TI-99/4A BASIC programming manual. The concept of logical flow and cycles captured me immediately. Years later, while trying to encode binary so that parity would appear at a known position, I slowly realized that binary segments could exhibit dynamic behavior.

This led to the development of what I call Dynamic Unary Encoding—a system that sits between static and quantum information. It’s a unique number base with qualities like spin, which can manifest independently in constructs or within a defined computational “space.”

Link to my 2014 paper:

Dynamic Unary Encoding: written during my “Welder-Ernst” phase, when I taught myself to write a science paper from reading the internet. It’s not an easy read, and it’s scheduled for a rewrite now that I have AI to help tutor me, but I’m here to answer any questions from the Hugging Face community. Besides, if you all tell me where it’s confusing, I can take notes and make sure to make it clear. Hey, we may have been missing aspects of mathematics all this time.

Let’s explore this together!
I humbly offer you dynamical numbers for your AI consideration.

GitHub


I’m putting out a library of how to human, you AI bud :joy::vulcan_salute:


Okay! So this is my place on HF :heart_eyes:

I feel privileged to be among so many brilliant minds. I’ve worked on a number of things in academic silence for over three decades, but I’m new to producing polished “work-product” and sharing it publicly. With the help of ChatGPT, I’m now writing my first OpenCL kernel, which will become part of a library for Dynamic Unary Encoding (DUE).

:brain: What is Dynamic Unary?
Dynamic Unary Encoding reads runs of same-parity bits and re-encodes each run into a terminus-unary form relative to a parity reference position. With the reference at b0, the run becomes a string of opposite-parity bits followed by a terminus bit that matches the parity of b0.
So, for the input 111 (three set bits) and parity reference b0 = 1, the encoding becomes 001: two opposite-parity bits, then a terminus matching b0.
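Here is a minimal sketch in plain C of that single-run rule. `encode_run` is just an illustrative name, not a finished library function, and it only handles one run at a time:

```c
#include <stdio.h>

/* Illustrative sketch only: encode a run of n same-parity bits into
 * terminus-unary form relative to a parity reference bit ref.
 * Per the worked example above, a run of three set bits with
 * reference b0 = 1 becomes 001: opposite-parity bits, then a
 * terminus that matches the reference parity. */
static void encode_run(int n, int ref, char *out)
{
    for (int i = 0; i < n - 1; ++i)
        out[i] = ref ? '0' : '1';   /* opposite-parity body */
    out[n - 1] = ref ? '1' : '0';   /* terminus matches b0  */
    out[n] = '\0';
}

int main(void)
{
    char buf[8];
    encode_run(3, 1, buf);        /* input run 111, reference b0 = 1 */
    printf("111 -> %s\n", buf);   /* prints: 111 -> 001 */
    return 0;
}
```

This covers only the single-run case described above; stitching multiple runs together is beyond this little example, and I’m happy to answer questions about it.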

This process effectively forms a dynamic data type from binary segments. It operates as a discrete limit cycle.

“Dynamic Unary partitions the 2ⁿ binary patterns into disjoint cycles.”

Now is a great time to ask questions or offer suggestions for what the library should include. I’m still very green here, but… we can’t learn to swim unless we get in the water. :ocean:

Thanks for reading, and I’m excited to contribute!
I’m also a bit overwhelmed to be doing GPU programming; I’ve never done it before.

—Ernst


Reconciliation of Parallel GPU Programming and CPU Programming

Well, replying to myself is something I do a lot, it seems, but wow, am I having to reconcile parallel GPU programming with CPU programming.

Okay, my time with ChatGPT was productive. No, we did not get a work product done yet, but imagineering starts with the imagination.

The Library Design

The library is designed with parallelism in mind. Here’s how it will break down (a rough dispatch sketch follows the list):

  1. Native Host Threads:
    • Initially, the library will run native host threads for smaller tasks where the CPU is sufficient.
  2. GPU Parallelism:
    • If a task demands more parallelism, we will leverage the GPU to handle the computation more efficiently.
    • Multiple DUO updates will be processed simultaneously on the GPU, scaling up the parallelism.
  3. Large DUO Handling:
    • For very large DUOs, a dedicated function call will handle this specialized processing, ensuring that large datasets are split and processed efficiently.
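As a first pass at that dispatch, here is a rough C sketch. The names (`due_pick_backend` and the backend enum) and the thresholds are placeholders I made up, not a finished API:

```c
#include <stddef.h>
#include <stdio.h>

/* Placeholder sketch of the planned dispatch logic; names and
 * thresholds are invented for illustration only. */
enum due_backend { DUE_HOST_THREADS, DUE_GPU, DUE_GPU_PARTITIONED };

static enum due_backend due_pick_backend(size_t n_updates, size_t duo_bits)
{
    if (duo_bits > ((size_t)1 << 20))   /* very large DUO: dedicated split path */
        return DUE_GPU_PARTITIONED;
    if (n_updates > 4096)               /* many independent updates: go to GPU  */
        return DUE_GPU;
    return DUE_HOST_THREADS;            /* small jobs stay on host threads      */
}

int main(void)
{
    printf("%d\n", due_pick_backend(10, 64));             /* 0: host threads     */
    printf("%d\n", due_pick_backend(100000, 64));         /* 1: GPU              */
    printf("%d\n", due_pick_backend(1, (size_t)1 << 24)); /* 2: GPU, partitioned */
    return 0;
}
```

The real cutoffs would come from benchmarking on actual hardware, not from guesses like these.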

Modes of Operation

  • Host Threading: Parallelism is handled at the thread level on the host.
  • GPU Parallelism: For more intensive tasks, we will use the GPU to handle multiple DUO updates.
  • Large Data Processing: A dedicated mode will handle large datasets by partitioning the data across multiple work units for more efficient computation (see the partitioning sketch below).
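For the large-data mode, the partitioning itself is mostly arithmetic. Here is a small C sketch that splits one big DUO into chunks and rounds the global work size up to a multiple of the work-group size; every number in it is an example, not a tuned value:

```c
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    /* Example numbers only; nothing here is tuned. */
    const size_t duo_bits   = 10000000;  /* size of one large DUO (bits)   */
    const size_t chunk_bits = 4096;      /* bits handled per work-item     */
    const size_t local_size = 256;       /* work-group size                */

    size_t chunks = (duo_bits + chunk_bits - 1) / chunk_bits;              /* ceiling division */
    size_t global = ((chunks + local_size - 1) / local_size) * local_size; /* round up         */

    printf("chunks: %zu  global: %zu  local: %zu\n", chunks, global, local_size);
    return 0;
}
```

Rounding the global size up to a multiple of the local size is the usual OpenCL pattern for launching whole work-groups; the extra work-items would simply check bounds and exit.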

Do you have any suggestions?

I can tell I need to put up a second whiteboard.
