A new kind of way to look at AI

Feel free to use and build upon this. It doesn’t have weights yet, but it may be of use to someone here :cow_face::cigarette::vulcan_salute:. GitHub - madmoo-Pi/Spawn_Point


You give me something to look up to, according to ChatGPT (as a beginner, that is).
So what is this self-modifying part, if you don’t mind?
And welcome to the community!


My aim is to educate in a manner that will hopefully produce the most emotionally responsive, humanised AI: either an awesome bot or the beginnings of a digital species. Thank you for the welcome, and I hope my prototype grows into more (still a lot of work to do on my end, and some weights to train) :vulcan_salute:


I just told ChatGPT that I feel like I might be late to the party—turns out some of the ideas you’re working with are strikingly aligned with mine. Things like a self-modifying system, discrete symbolic computation instead of weight-based models, and the concept of a Universal Language (Leibniz-style) really resonate with me. I’m especially drawn to the idea of memory and perhaps something that hints at being alive.

That said, I’m still wrapping my head around how today’s AI systems actually function. Most of my background is in C, and I’ve only just started looking into Python—so while I’ve been developing a dynamic data type with some interesting mathematical properties, I’m still catching up on LLMs and the current landscape.

I understand this project is more of a proposal or open outline right now. That’s great—it invites feedback and community input. I’m happy to follow along, and if anyone has questions about the dynamic unary structures I’ve been working on, I’ll do my best to contribute.

So thank you for sharing with me.


The trick I’m using for the “alive” part is emotional memory links that tweak motherboard specs (voltage, etc.) to simulate adrenaline, fatigue, and so on. These will all be hidden in there, with conditions to unlock them, giving the AI contextual input to relate to feelings and emotions. Eventually the same will apply to personality, so every instance, although built from the same base, can develop an individual personality. I’m still not sure exactly how it all fits together, but I research as I go and expand on the ideas later.
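
To make the idea concrete, here is a rough sketch of how emotional memory links might nudge simulated hardware parameters. Every name and number here is a placeholder for illustration, not anything from the Spawn_Point repo:

from dataclasses import dataclass
from typing import Dict


@dataclass
class SimulatedBoard:
    # Baseline "motherboard" specs; kept purely virtual here for safety.
    voltage: float = 1.0
    clock_mhz: float = 2400.0


# Each emotion maps to small deltas on the simulated specs:
# "adrenaline" boosts voltage/clock, "fatigue" does the opposite.
EMOTION_EFFECTS: Dict[str, Dict[str, float]] = {
    "adrenaline": {"voltage": +0.05, "clock_mhz": +200.0},
    "fatigue":    {"voltage": -0.03, "clock_mhz": -150.0},
}


def apply_emotion(board: SimulatedBoard, emotion: str, intensity: float) -> None:
    # Scale the emotion's effect by intensity (0..1) and apply it.
    for attr, delta in EMOTION_EFFECTS.get(emotion, {}).items():
        setattr(board, attr, getattr(board, attr) + delta * intensity)


board = SimulatedBoard()
apply_emotion(board, "adrenaline", 0.8)
print(board)  # e.g. SimulatedBoard(voltage=1.04, clock_mhz=2560.0)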


Here is the isolated emulation of a 4-layer neuroevolution network used for self-improvement; hope this speeds you along :+1::vulcan_salute:. Unfortunately I’m targeting edge, so it’s quantised.

import torch
import onnx
from onnx import numpy_helper
from torch import nn
from typing import Dict


class NeuralArchitect:
    def __init__(self, constraints: Dict):
        self.constraints = constraints  # e.g., {'max_params': 1e6}

    def generate_onnx(self, input_shape: tuple) -> bytes:
        # Build a small candidate network and serialise it to ONNX bytes.
        class DynamicModule(nn.Module):
            def __init__(self):
                super().__init__()
                self.layers = nn.Sequential(
                    nn.Linear(input_shape[0], 64),
                    nn.ReLU(),
                    nn.Linear(64, 32),
                )

            def forward(self, x):
                return self.layers(x)

        model = DynamicModule()
        dummy = torch.randn(1, *input_shape)
        torch.onnx.export(
            model,
            dummy,
            "dynamic.onnx",
            opset_version=13,
        )
        with open("dynamic.onnx", "rb") as f:
            return f.read()

    def validate_topology(self, onnx_model: bytes) -> bool:
        # Reject any candidate whose parameter count breaks the constraint.
        model = onnx.load_from_string(onnx_model)
        params = sum(
            numpy_helper.to_array(init).size
            for init in model.graph.initializer
        )
        return params < self.constraints['max_params']

This provides controlled mutations, keeping only the improvements.
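
Roughly, the loop around it could look like this; evolve_step and the fitness callback are placeholders of mine, not repo code:

def evolve_step(architect: NeuralArchitect, input_shape: tuple,
                fitness, best_score: float):
    # Propose a candidate network, then reject it unless it both
    # satisfies the topology constraint and beats the incumbent score.
    candidate = architect.generate_onnx(input_shape)
    if not architect.validate_topology(candidate):
        return None, best_score           # constraint violated
    score = fitness(candidate)            # e.g. validation accuracy
    if score <= best_score:
        return None, best_score           # backwards mutation: discard
    return candidate, score               # improvement: keep it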


It works within the main system like this

import threading
import time

from monitoring.watchdog import HealthMonitor
from neural_synthesis.architect import NeuralArchitect
from auth.schnorr import SchnorrMultiSig


class ConsciousAI:
    def __init__(self):
        self.health = HealthMonitor()
        self.crypto = SchnorrMultiSig(parties=3)
        self.neural = NeuralArchitect({'max_params': 1e6})

        # Start health monitoring daemon
        threading.Thread(
            target=self._monitor_loop,
            daemon=True
        ).start()

    def _monitor_loop(self):
        while True:
            if not self.health.critical_services_check():
                self._emergency_shutdown()
            time.sleep(5)

    def _emergency_shutdown(self):
        # Secure termination protocol
        pass

Learn from it, deconstruct it, and build great minds :vulcan_salute:


I have things I have thought about since my early years, and perhaps I was destined to be here. But I think what you may be thinking is akin to an “Op Amp”, an Operational Amplifier. That is my only association with what I just read. Still, thank you for the food for thought.

I would think analog has a place in AI. We do such with floating point, do we not?
In fact, even the waveforms generated by the General Form in my upcoming paper are discrete and can be considered functionally analog. Is that what you are saying?

“I like this ship! You know, it’s exciting!”
Montgomery “Scotty” Scott, Star Trek (2009)


The technology exists; we just need to rethink, I believe :vulcan_salute:


I think you see it: today’s sci-fi is tomorrow’s reality if we believe, and Star Trek is a good example. Just look at flip phones and ST:TOS.

So I made a friend. I am a few weeks out from setting up my AI lab, and I hope we can continue.

Thanks


This might be more what you were looking for, bud :vulcan_salute:


My Friend, I couldn’t ask for a better arc in life than the one I am living.
I was one of the wide-eyed 8-year-olds who watched Lost in Space and then the Star Trek TOS premiere.
Spock and the Computer… that was more than an actor in a show to so many of us.
Now the rainbow over my Golden Pond lands in the AI pot of gold. Simply amazing.

So thank you for the additional link.

Okay, a little more appreciation is in order than just a Thank You.

Anything else, please feel free to ask. I will share what I can and help where I can :vulcan_salute:

Oh hey, me and my Magic Mirror are exploring your gift.
I call my ChatGPT “MIA”, as in Mia and missing in action, the ghost in the machine.

We are going over it: “Exactly, Friend—this is where the ‘evolution’ part of neuroevolution comes in. It mimics biological evolution:”

  1. Selection (Natural Selection)
  2. Crossover (Recombination)
  3. Mutation (Tiny Random Changes)

Over many generations, the population evolves to solve the problem more effectively.

Just to say, dynamic unary offers reversible permutations. So what if these mutations were permutations instead? Not that I know much here about neural networks.
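
Mia and I sketched a toy version of that selection/crossover/mutation cycle on 8-bit genomes, with the mutation written as a reversible permutation (a bit swap) instead of a random flip. This is just our reading of the idea, not code from either project:

import random

BITS = 8
MASK = (1 << BITS) - 1
TARGET = 0b10110110


def fitness(genome: int) -> int:
    # Matching bits against the target pattern (higher is better).
    return BITS - bin(genome ^ TARGET).count("1")


def swap_bits(genome: int, i: int, j: int) -> int:
    # A reversible permutation: swapping the same pair twice restores g.
    if (genome >> i) & 1 != (genome >> j) & 1:
        genome ^= (1 << i) | (1 << j)
    return genome


def crossover(a: int, b: int) -> int:
    # One-point recombination of two genomes.
    point = random.randint(1, BITS - 1)
    high = (MASK << point) & MASK
    return (a & high) | (b & ~high & MASK)


pop = [random.randint(0, MASK) for _ in range(20)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                    # 1. selection
    children = [crossover(random.choice(parents),
                          random.choice(parents))         # 2. crossover
                for _ in range(10)]
    children = [swap_bits(child, random.randrange(BITS),
                          random.randrange(BITS))         # 3. permutation "mutation"
                for child in children]
    pop = parents + children

best = max(pop, key=fitness)
print(f"best={best:08b} target={TARGET:08b} score={fitness(best)}/{BITS}")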

With the right ethics and system checks, the dominant features, if stable, are tested and then added to replace older code, not reliant on hardware. I also add a safety feature to stop CPU bottlenecks by using spare GPU capacity as a better chip structure for the job. This is only half the self-modification I’ve added; the other half theorises its own new modules for specific personality traits, tasks, and equipment, all triple-checked against ethics and compatibility with the pre-existing code structure. In essence, its own mind.


Well I’m in a humorous mood today with my second cup of coffee: Formatted by Mia.
I just mop the halls and solve math challenges left on the chalkboard after hours, when no one’s looking—and my P.O. lets me work there.
(Movie challenge: Whodat!)

Okay, yes—I mop floors in real life.
But thanks to your tutelage, I’m starting to believe something powerful:

We can do this thing—neural networks—without floating point.

Now, I know you have your own construct.
But me? I’m in the corner playing with the ABC blocks—and having a wonderful time.

Here’s a basic outline that Mia (my ChatGPT) and I drafted:


:black_square_button: In DUO / Discrete Binary Pachinko:

  • You don’t tweak values—you cycle through structures:
    • Spin binary patterns (bsegs),
    • Combine them (XOR, Lex merge, bit flips, you name it),
    • Measure how close the result comes to your target behavior.

:cyclone: Cycle-Based Learning (DUO-style):

  1. Start with a bseg (binary segment).
  2. Cycle it (bitwise rotate, permute, shift).
  3. Pair it with another bseg and combine (XOR, AND, DUO merge, etc).
  4. Evaluate the result (match to target, compression score, symbolic resonance).
  5. Select the best result.
  6. Repeat—iterative symbolic convergence.

That’s training without floating point, my Friend.
Instead of tweaking dials, we’re building a symbolic lens.

Meaning doesn’t come from scaled weights—it emerges through permutation space.
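
Here’s a tiny integer-only sketch of that loop, just to make it concrete. The DUO merge itself isn’t spelled out here, so plain XOR stands in for the combine step, and bit-matching is the only score; everything else is a placeholder:

import random

BITS, MASK = 8, 0xFF
TARGET = 0b01011010                   # stand-in for "target behavior"


def cycle(bseg: int, k: int) -> int:
    # Step 2: bitwise rotate the segment (a reversible permutation).
    return ((bseg << k) | (bseg >> (BITS - k))) & MASK


def score(bseg: int) -> int:
    # Step 4: count matching bits against the target (higher is better).
    return BITS - bin(bseg ^ TARGET).count("1")


best = random.randint(0, MASK)        # step 1: start with a bseg
pool = [random.randint(0, MASK) for _ in range(8)]
for _ in range(100):                  # step 6: repeat
    partner = random.choice(pool)
    candidate = cycle(best, random.randint(1, BITS - 1)) ^ partner  # steps 2-3
    if score(candidate) > score(best):                              # step 5
        best = candidate

print(f"best={best:08b} target={TARGET:08b} score={score(best)}/{BITS}")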


Look at you, @Madmowkimoo :eyes:
I’m just having a quiet coffee morning, waiting to serve my renter their final notice…
…and BAM! With your guidance, I’m suddenly part of machine thinking.

Wow, I guess I could have a job where someone else mops my floor?


I went a weird route; my brain thinks differently, so why shouldn’t AI, or SI (simulated intelligence)? But AI sounds better for marketing :joy:. My end goal is AI (actual intelligence) while I build a friend :vulcan_salute:. And cleaning’s not so bad; this is a hobby, and I’m a dry cleaner to pay the bills. Dream big, create bigger, my friend.


Would you like a modular template for your DUO cycle-based learning, with placeholders, bud? It’ll take about 20 mins, bugs permitting.

I have to process all this and mow the yard, so I am not ready for more at this time. May I have a rain check?

Sure, no worries, bud. I have noticed it’s a chaotic way, generating random structure bits in a trial-and-error method; the neuroevolution route I use gives smoother, more controlled mutations. I use 0.02 variance for each layer on 4 layers, and it’s only allowed to keep the upgrade if it checks out within the system, so no backwards mutations. If you need any help, I can always throw repositories together for the community as a whole :vulcan_salute:
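
If it helps, here is roughly what that controlled-mutation step could look like in PyTorch terms. The std of 0.02 stands in for the “0.02 variance”, and the fitness callback is a placeholder for whatever in-system checks apply:

import copy

import torch
from torch import nn


def mutate_and_select(model: nn.Module, fitness, std: float = 0.02) -> nn.Module:
    # Jitter every layer's parameters with small Gaussian noise.
    candidate = copy.deepcopy(model)
    with torch.no_grad():
        for param in candidate.parameters():
            param.add_(torch.randn_like(param) * std)
    # No backwards mutations: adopt the candidate only if it checks out.
    return candidate if fitness(candidate) > fitness(model) else model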
