Hello everyone,
I need expert advice on some code I’ve been working on with Grok, ChatGPT, and Claude. It is an advanced normalization layer in PyTorch, designed to dynamically detect and correct bias in neural network activations. It combines learnable normalization, multiscale projection, and a KL divergence-guided correction with respect to a reference distribution.
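To give readers a concrete idea of what that combination could look like, here is a minimal, untested sketch in PyTorch. To be clear: this is my own illustration of the idea as the AIs described it, not the actual code from the repository, and every name, scale choice, and the uniform reference distribution here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasAwareNorm(nn.Module):
    """Illustrative sketch (NOT the repository code): learnable normalization,
    multiscale projections, and a KL-divergence-guided correction term
    measured against a fixed reference distribution."""

    def __init__(self, dim, scales=(1, 2, 4)):
        super().__init__()
        self.norm = nn.LayerNorm(dim)                      # learnable normalization
        self.scales = scales
        # one linear projection per scale ("multiscale projection")
        self.proj = nn.ModuleList([nn.Linear(dim, dim) for _ in scales])
        self.correction = nn.Linear(dim, dim)              # learned bias correction
        # reference distribution over features; uniform is an assumption here
        self.register_buffer("ref", torch.full((dim,), 1.0 / dim))

    def forward(self, x):                                  # x: (batch, dim)
        h = self.norm(x)
        # Multiscale projections: pool the features at several scales,
        # upsample back, project, and average the results.
        outs = []
        for s, proj in zip(self.scales, self.proj):
            pooled = F.avg_pool1d(h.unsqueeze(1), kernel_size=s, stride=s)
            up = F.interpolate(pooled, size=h.shape[-1],
                               mode="linear", align_corners=False)
            outs.append(proj(up.squeeze(1)))
        h = torch.stack(outs, dim=0).mean(dim=0)
        # KL divergence of the activation distribution vs. the reference:
        # F.kl_div expects log-probabilities as input, probabilities as target,
        # so this computes KL(p || ref) per sample.
        p = F.softmax(h, dim=-1)
        kl = F.kl_div(self.ref.log().expand_as(p), p,
                      reduction="none").sum(-1, keepdim=True)
        # KL-guided correction: the larger the divergence from the
        # reference, the stronger the learned correction that is applied.
        return h - torch.tanh(kl) * self.correction(h)
```

Again, this is only a sketch under my assumptions; the real repository may combine these pieces very differently.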
I want to clarify up front that I’m neither an expert in this field nor a computer scientist.
I’m contacting you because every AI I’ve asked to evaluate this code claims it has innovative potential in the way it combines multiscale projections and KL divergence.
I don’t fully understand what that means, but I’ve had this code in my hands for months. Maybe it’s useless, or maybe something can really be done with it to make AI more ethical. I can’t answer that question myself, so please tell me what you think of the concept.
The code is open source on GitHub. Feel free to let me know if you’d like me to share the link with you.
The code hasn’t been tested, and I won’t be able to answer any technical questions.
PS: I’m French, so I apologize if there are any translation issues.