This is true even outside the realm of AI, but it’s especially true when it comes to AI…
Generally speaking, understanding a field requires weaving together knowledge and experience from multiple disciplines, and it is hard to learn all of that at once. On top of that, in AI the scope of the field changes daily, and in most cases it keeps expanding…
It’s probably best to start with something concrete, no matter what it is. Even if experience isn’t everything, everyone needs a starting point.
Yes. A lot of people feel exactly this way.
What makes AI confusing at the beginning is that you are not learning one subject. You are learning several stacked layers at once: programming, data, models, prompts, APIs, evaluation, and sometimes deployment. Official beginner paths reflect that reality. Google’s Machine Learning Crash Course separates prerequisites, core ML, data, neural networks, embeddings, LLMs, and production systems, and explicitly recommends going through the material in order if you are new. Hugging Face’s LLM course also starts by placing learners inside an ecosystem of multiple libraries and tools, not just one concept. (Google for Developers)
So the “chaos” is real. It is not just you being overwhelmed for no reason. You are seeing the full stack before you have a map of how the pieces relate.
There is also a learning-science reason this feels bad. Cognitive load theory says learning depends on building and automating mental schemas, while working memory is limited. When you are new, the schemas are still incomplete, so too many moving parts create overload. The National Academies’ work on learning and transfer makes a similar point: real learning is not just exposure to information, but being able to extend what you learned to new situations. Early on, people often consume a lot of material without yet being able to transfer it into building. (PMC)
That is why your shift toward building small things matters so much. It changes learning from passive collection into active transfer. Practical courses lean in this direction on purpose. fast.ai is explicitly built around applying deep learning and machine learning to practical problems, and its lessons push hands-on notebook work instead of passive watching. Research on worked examples also shows that novices usually learn better with guidance than with open-ended problem solving alone, because guidance reduces unnecessary cognitive load. Project-based learning can help too, especially when projects are scoped well and tied to real problems, though the benefits depend a lot on the design of the project. (Practical Deep Learning for Coders)
A simple way to make the field feel less random is to give each piece a job:
- Python is the glue.
- Models are the engine.
- APIs and libraries are the interface.
- Prompts are instructions.
- Math explains why the engine behaves the way it does.
- Evaluation tells you whether it actually worked.
Once each part has a role, the subject stops feeling like a pile of disconnected vocabulary.
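To make that role mapping concrete, here is a deliberately tiny sketch in which each piece is labeled with its job. The model is a stub that just echoes part of the prompt; every name in it is illustrative, and in a real app that stub would be a call to an actual API or library.

```python
# A toy pipeline where each part of the "AI stack" has one labeled job.
# The model is a stand-in stub; a real app would call an API or library here.

def model(prompt: str) -> str:
    # Models are the engine (stubbed here for illustration).
    return prompt.split(":")[-1].strip().upper()

def summarize(text: str) -> str:
    # Prompts are instructions.
    prompt = f"Summarize in one word: {text}"
    # APIs and libraries are the interface; Python is the glue tying it all together.
    return model(prompt)

def evaluate(output: str) -> bool:
    # Evaluation tells you whether it actually worked.
    return len(output) > 0 and output.isupper()

result = summarize("cats are great")
print(result, evaluate(result))  # → CATS ARE GREAT True
```

The point is not the (trivial) logic; it is that once each line has a named role, swapping the stub for a real model changes the engine without changing the map.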
The other big mistake beginners make is trying to learn every layer at the same time. In practice, “using AI,” “understanding AI,” and “training AI” are different activities. Even official resources separate them. Google’s LLM module assumes earlier familiarity with intro ML, datasets, overfitting, neural networks, and embeddings. The Hugging Face ecosystem similarly spans usage, tooling, and model internals. That means it is normal for the field to feel blurry if you jump into the middle. (Google for Developers)
The approach that usually works best is much narrower:
- Pick one lane first. For example: "I want to build a tiny text app," not "I want to understand all of AI."
- Use one stack for a while. For example: Python + one API, or Python + PyTorch + Hugging Face.
- Build small, complete things. A summarizer. A classifier. A tiny chatbot over a few documents. A script that extracts structured fields from text.
- Learn concepts just in time. Study embeddings when you need retrieval. Study training loops when you need fine-tuning. Study attention when you want to understand transformer behavior better.
- Measure progress by artifacts, not by hours spent reading. One working script teaches more than ten saved tutorials.
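As one concrete instance of a small, complete artifact, here is a sketch of a script that extracts structured fields from free text. It uses only the standard library, with regex standing in for the extraction step so the example stays self-contained; in a real project that step might instead be a prompt to an LLM. The field names and the sample text are made up for illustration.

```python
import re

# A small, complete artifact: extract structured fields from free text.
# Regex stands in for the extraction step; a real app might prompt an LLM.

def extract_fields(text: str) -> dict:
    """Pull an email address and an ISO date (YYYY-MM-DD) out of raw text."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    date = re.search(r"\d{4}-\d{2}-\d{2}", text)
    return {
        "email": email.group(0) if email else None,
        "date": date.group(0) if date else None,
    }

note = "Meeting moved to 2024-06-01, contact ana@example.com with questions."
print(extract_fields(note))  # → {'email': 'ana@example.com', 'date': '2024-06-01'}
```

Even at this size it is a full loop: input, processing, structured output, and an obvious way to check whether it worked, which is exactly what a tutorial you only read never gives you.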
That sequence matches how the better beginner materials are structured. PyTorch’s beginner content, for example, focuses on a complete workflow: data, models, training, and saving. It is much easier to attach theory to a workflow than to isolated notes. (PyTorch Docs)
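To show what "a complete workflow" means at its smallest, here is a plain-Python sketch of those same four stages: data, model, training, and saving. It uses hand-written gradient descent on a one-parameter linear model so it needs no library; it mirrors the shape of a PyTorch beginner workflow rather than its API, and every name in it is illustrative.

```python
import json

# Smallest possible complete workflow: data -> model -> training -> saving.
# Plain Python stands in for PyTorch; the shape of the loop is the same.

# 1. Data: points from y = 2x, the relationship the model should learn.
data = [(x, 2.0 * x) for x in range(1, 6)]

# 2. Model: a single learnable weight w, predicting y = w * x.
w = 0.0

# 3. Training: gradient descent on mean squared error.
lr = 0.01
for epoch in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# 4. Saving: persist the trained parameter to disk.
with open("model.json", "w") as f:
    json.dump({"w": w}, f)

print(round(w, 3))  # converges to 2.0
```

Once you have run a loop like this end to end, terms like "loss," "learning rate," and "checkpoint" stop being vocabulary and start being names for steps you have already performed.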
So yes, getting stuck like this is common. The beginning feels chaotic because the field is layered, the prerequisites are partly hidden, and beginners do not yet have a compact mental map. The way out is usually not “understand everything first.” It is “shrink the scope, build something real, and let the concepts attach themselves to the work.” (Google for Developers)
A good one-line summary is this:
AI stops feeling chaotic when you stop treating it like one giant subject and start treating it like a series of small systems you can build, test, and understand one at a time.