Hi everyone!
I’m Vishnu, and I’ve been experimenting with deep learning recently. What I’ve noticed is that solving the same problem with a multi-layered (deep learning) ANN gives a much better accuracy score than the traditional ML route (and more satisfaction too).
So I put together a small notebook where a deep learning ANN solves the same problem that was previously solved by a traditional ML model, and I’ve recorded the accuracy scores at the end of the notebook. I’m sharing it here in the hope of getting feedback from you.
Here’s the notebook:
https://www.kaggle.com/code/ruforavishnu/random-experiments-deep-learning-notebook1
In this notebook I:

- Explored how adding multiple layers to an ANN solves the problem better than traditional ML models
- Logged a few observations while running these experiments
- Kept it simple on purpose — mainly for learning and building intuition
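To give a feel for the kind of comparison I mean, here’s a minimal scikit-learn sketch (not the notebook’s actual code or dataset — the synthetic data and layer sizes are just placeholders): a traditional ML baseline and a multi-layer ANN trained on the same data, with accuracy reported for both.

```python
# Minimal sketch: same problem, traditional ML model vs. multi-layer ANN.
# Synthetic data stands in for the notebook's real dataset (an assumption).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic tabular binary-classification data
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Traditional ML baseline
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc_lr = accuracy_score(y_test, lr.predict(X_test))

# ANN with multiple hidden layers (64 and 32 units — arbitrary choices)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=42).fit(X_train, y_train)
acc_mlp = accuracy_score(y_test, mlp.predict(X_test))

print(f"Logistic regression accuracy: {acc_lr:.3f}")
print(f"Multi-layer ANN accuracy:     {acc_mlp:.3f}")
```

Whether the ANN actually wins depends on the dataset and tuning, of course — on small tabular problems the baseline can be surprisingly hard to beat.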
I’d really appreciate any kind of feedback, including:

- Ideas for experiments or projects I should try next
- Any mistakes or inefficiencies you notice
Thanks in advance!
- Vishnu