Introducing DCLR Optimiser – Faster Training for Facial Recognition Models

DCLR Optimiser — Outperforming Lion and Adam on CIFAR‑10
This Space demonstrates a SimpleCNN trained with the DCLR optimizer, applied to CIFAR‑10 image classification. The interface is simple: upload an image, submit it, and view the model’s predictions.
What makes this demo significant is the optimizer itself. DCLR consistently outperforms top‑tier optimizers such as Lion and Adam, achieving faster convergence, lower loss, and stronger generalization. In trials, DCLR reached 70.70% test accuracy, surpassing benchmarks established by widely used optimizers. DCLR_OPTIMISER_CIFAR-10 - a Hugging Face Space by RFTSystems
Each run is reproducible and narratable, sealed as a Codex artifact that documents optimizer lineage and collapse torque resilience. This Space is not just a prediction tool — it’s a showcase of how symbolic overlays can redefine optimizer performance at scale.

1 Like