How can you tell if an algorithm is actually "learning"?

Hi all!

First post here, so I’m starting with a basic, somewhat philosophical question.

How can you tell if an algorithm is actually “learning” anything at all?

What is the “learning” part of machine learning in the first place anyway?

Yeah yeah, I know: Markov processes and whatnot. That means almost nothing to me at the moment, so can someone please put it in plain English?

What about more probabilistic approaches? Do they “learn” in the same sense?

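To make the question concrete, here’s roughly the kind of thing I’ve been poking at: a minimal sketch (my own toy setup, assuming scikit-learn and a made-up synthetic dataset, not anything from a textbook) where a model is compared against a dumb always-predict-the-majority-class baseline on data it never saw during training. Is “doing better than a trivial baseline on held-out data” what people actually mean by learning, or is there more to it?

```python
# Minimal sketch of one possible reading of "learning":
# a model "learns" if it beats a trivial baseline on data
# it was never trained on. Toy data, purely for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.dummy import DummyClassifier

# Synthetic classification problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Dumb" baseline: always predict the most common class in the training set.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# An actual model fit to the training data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("baseline accuracy:", baseline.score(X_test, y_test))
print("model accuracy:   ", model.score(X_test, y_test))
```
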
Thanks!
Noob.