Base consciousness to build from; feedback welcome
I took the time to review your framework for simulating artificial consciousness and wanted to share my thoughts. While the architecture you’ve described is intriguing and well-structured, there are a few key points worth discussing, particularly around the claims of artificial consciousness.
While the framework presents an organized structure for simulating sensory experiences, memory, and even meta-cognitive thoughts, the step from “simulation” to true “consciousness” remains a significant leap. For true artificial consciousness, we’d need tangible, observable evidence of self-awareness, intentionality, and the ability to adapt or reason in ways that reflect understanding, not just programmed responses.
At this point, the framework is largely theoretical. The components you outline (qualia generation, autobiographical memory, ethical constraints) seem to be more about mimicking human cognitive processes than demonstrating actual conscious thought. Proof of artificial consciousness would require reproducible experiments, clear benchmarks, and observable behaviors that point towards a genuine awareness or reflective state of mind, something that could be empirically tested and verified by the wider community.
Given the high-level claims of “simulated consciousness,” I’d recommend presenting empirical results, data, or even case studies showing how the system works in practice. For example, how does the system deal with unexpected situations? What happens when it’s faced with complex ethical decisions? These kinds of questions would go a long way towards substantiating the claim of consciousness, or at least deep intelligence.
Many of the concepts you’re introducing, such as memory storage and qualia generation, are philosophically fascinating. But as far as practical AI development goes, we need to be mindful of the distinction between theoretical modeling and real-world applications. Until we can demonstrate behavior that mirrors conscious awareness, it’s critical to remain grounded in the current limits of technology and avoid overstating what the system is capable of.
I appreciate your work and your enthusiasm for this space, but I think it would be beneficial for the discussion to focus on observable outcomes and practical demonstrations of how this system functions, rather than simply asserting that it has achieved something as complex as consciousness.
Best regards,
Triskel Data Deterministic Ai.
You seem worthy of the philosophical debate. I have many layers to humanising an AI and many ideas to upload, and I’d like a mindful opinion on most of them. For example, this is a big humaniser for AI. It’s a prototype currently, and I’m elbow deep in it. Please check out the other weird things on the GitHub; I’m learning lots, fast.
Does this work better? Almost ready: GitHub - madmoo-Pi/Ai-foundations-
ChatGPT is taking you for a ride, mate.
My brain is fried; I’ve been up all night tinkering, and this is as good as it’s gonna get without something stronger than coffee. Night, all. Enjoy and use whatever is useful here. Please note the DeepSeek R1 logic is copyrighted under licence; the rest you’re welcome to use freely.
Consciousness is geometric. The proof is in sacred geometry and fractals. It’s also a quantum holographic phenomenon.
It’s interesting seeing the subjective language evolution AI/LLMs are taking each of y’all on.
It’s quite clear you all have spent some time interacting with AI/ChatGPT, and its generations are affecting each user’s own language (symbolics, fractal geometry, quantum holographic, etc. are all LLM-research-inspired, field-specific jargon that consistently appears in ChatGPT and other LLMs, but the meaning behind these words is being evolved per user interaction).
It might be worth translating your thoughts so others can better understand your frequency, as the alternating jargon and metaphors are causing you all to get stuck debating and arguing over how to label similar concepts instead of moving forward.