AGI, AI, DL, ML… What’s the Difference?

Jamie McGowan · Published in The Startup · 7 min read · Feb 24, 2021


In this fast-moving, ever-growing world of self-driving cars, fake news and chatbots, it is easy to get lost in all the jargon.

So, here I will try to clarify for the general reader what each of these terms means and where they overlap.

We will start small and work our way up to the big one, Artificial General Intelligence (AGI).

Machine Learning (ML)

The preferred term used by most AI researchers when explaining their work is Machine Learning.

Why?

Mostly because it is so damn self-explanatory.

I mean… what is it? It’s when a machine learns something!

See how easy that was? As a scientist, it is very rare that you come across a term which means exactly what it says, so this is a very good place to start.

Anyway, I said that I would explain what it is, so I will give some insight into that here.

The term Machine Learning was coined in 1959 (by Arthur Samuel) and is used to describe a computer reaching a desired output on its own terms, without human intervention.


There are a couple of subtleties that statement glosses over, but for the generally interested reader this is the basic idea.

An example I like: imagine I gave you five data points arranged in an approximate line and asked you for a line of best fit. The first thing you would do is get your ruler out and start waving it around over the data points to find a good fit.

This is what machines do when given this task: first the machine randomly places its ruler down on the page, and then asks… “Where must I move this ruler to get a better fit to the data points?”

It decides this using some relatively simple mathematics: it measures how far the ruler is from the points and in which direction, then uses that information to move the ruler closer to where it needs to be.

For humans, this task is fairly intuitive, to the point where it is almost automatic. But if you sit and try to decode every little thing your brain does during this task, you will arrive at an algorithm very similar to the computer’s.

By doing this over and over again, each time moving the ruler in small increments, the machine will learn the best fit for our data.

This is arguably the simplest example of ML.
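
If you fancy seeing that ruler-moving loop in code, here is a minimal sketch of it as gradient descent on a straight line. The data points, learning rate and number of steps are made up for illustration, and this is only one way to fit a line, but it captures the “measure the error, nudge the ruler” idea described above.

```python
import numpy as np

# Five made-up points lying roughly on a line.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.9, 3.1, 5.2, 6.8, 9.1])

# Start with a randomly placed "ruler": y = slope * x + intercept.
rng = np.random.default_rng(0)
slope, intercept = rng.normal(size=2)

learning_rate = 0.02
for step in range(1000):
    prediction = slope * x + intercept
    error = prediction - y                 # how far the ruler is from each point
    # Gradients of the squared error tell us which way to move the ruler.
    grad_slope = 2 * np.mean(error * x)
    grad_intercept = 2 * np.mean(error)
    # Nudge the ruler a small step in the direction that reduces the error.
    slope -= learning_rate * grad_slope
    intercept -= learning_rate * grad_intercept

print(f"learned line: y = {slope:.2f}x + {intercept:.2f}")
```

Run it and the printed slope and intercept settle on a line that passes close to the points.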

Deep Learning (DL)

Deep Learning. How cool is that name? If these words had emojis attached, DL would definitely be 😎 .

But what does it mean, I hear you ask.

It means learning but deep? No, that doesn’t really work… See what I mean about ML being a rare find in science?

The way that I like to think of, and explain, Deep Learning is through something we can all (hopefully) relate to… the brain.


The concept of Deep Learning is built on the idea that we can build complexity from simplicity. For example, if I gave you some lines, you could arrange them into a more complex square, or even a cube. Or if I gave you enough bricks, in theory you could build a house; or, more importantly, you could learn to!

And that’s exactly what these computer models do: they start with very simple structures at their lowest layer (the lines/bricks) and combine them into more complex concepts (squares/walls), until eventually the top layer produces the desired output (a cube/a house).
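
As a very rough sketch of that stacking idea, here is a toy layered model. The layer sizes are arbitrary and the weights are random rather than learned, so this is purely illustrative: a real Deep Learning model would first train those weights using the same “measure the error and nudge” loop from the ML section.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, out_features):
    """One layer: mix the inputs together with weights, then apply a nonlinearity."""
    weights = rng.normal(size=(inputs.shape[-1], out_features))
    return np.maximum(0.0, inputs @ weights)   # ReLU, a simple common choice

# A fake "image" of 64 pixel values (the lines/bricks).
pixels = rng.normal(size=(1, 64))

edges = layer(pixels, 32)    # simple patterns built from pixels
shapes = layer(edges, 16)    # combinations of patterns (squares/walls)
objects = layer(shapes, 4)   # high-level concepts at the top (a cube/a house)

print(objects.shape)  # (1, 4)
```

Each layer only does something simple, but stacking many of them is what puts the “deep” in Deep Learning.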

This can be difficult to imagine, but your brain does it with pretty much everything you experience. To explain how, we will briefly look at the visual cortex.

In your eye, you have lots of cells, each of which observes a specific spot in your field of vision. It is not one cell that observes the words you are reading right now; it is a huge collection of cells.

It is then the job of your brain to combine each little bit of information from each cell into the actual words you see. Relating this back to earlier, your brain goes through many steps of combining this information to give you the highly complex and rich visual information you are seeing right now!

Pretty incredible right?!

It is important to note that the field of Deep Learning is a subset of Machine Learning which we described above. So, all DL is ML, but ML is not necessarily DL.

Artificial Intelligence (AI)

If Deep Learning is a cool name then AI is up there with the greats!

The 21st-century way to sound clever is no longer mentioning “Rocket Science”, it’s dropping the odd “Artificial Intelligence” into a conversation.

But honestly, AI has been around for over 150 years. It is a very old concept, first conceived alongside the idea of programmable computers. (Check out Ada Lovelace for this; she’s a boss and deserves to be more of a household name.)

Just as ML contained DL above, AI contains both of these subjects, along with many others.

AI is therefore far more complex and also far simpler than people think when they hear it.

Technically, an AI could be a simple calculator, nothing too fanciful, just a set of defined rules which are obeyed depending on the user’s input. On the other hand, the same term is used to describe the technology in self-driving cars.
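
To make the “set of defined rules” end of that spectrum concrete, here is a toy, purely illustrative example. There is no learning happening at all, yet a rule-based system like this still sits under the broad umbrella of AI:

```python
def calculator(a, operator, b):
    """A tiny rule-based 'AI': fixed rules applied to whatever the user types in."""
    if operator == "+":
        return a + b
    if operator == "-":
        return a - b
    if operator == "*":
        return a * b
    if operator == "/":
        return a / b
    raise ValueError(f"no rule for operator {operator!r}")

print(calculator(3, "+", 4))  # 7
```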

It is therefore hopeless for me to try to explain everything AI is, as it covers too many things to mention, so instead I will outline the challenges in AI (and also in ML and DL).

Rather counter-intuitively, the field of AI is excellent at solving tasks which are intellectually difficult for humans, such as our calculator example above. However, when it comes to tasks which are simple and intuitive for humans, AI tends to fall short.


Arguably, this is not actually a problem with the technology, it is a problem with us. We find it very easy to recognise a car in an image, from a multitude of angles and with any number of backgrounds. And rather extraordinarily, a child only has to be shown a few examples of a car before it can do the same.

However, computers require a more formal description to be able to do this, and we find it very difficult to provide one.

To formally describe a car, you would have to describe every situation you could ever see a car in: with or without wheels, and so on.

Imagine that: if we had an AI mechanic that had only ever seen cars with their wheels on, it might go to replace the wheels, and as soon as it takes them off, it would no longer recognise that it is dealing with a car.

These problems can be tackled with data, of course: by showing an AI millions of examples of cars in many different situations, you can minimise this risk. But inevitably this only goes so far.

One of the main challenges is understanding this difference between humans and AI: why is there such a gap in how we learn, and how can we improve the way an AI actually learns?

Artificial General Intelligence (AGI)

You made it!! Or maybe you didn’t and I’m just talking to myself at this point?

Either way, we’ll continue and hope for the best.

The last section was a bit out there, but I hope you got an idea of some problems of AI and why it can be a difficult subject.


Now, AGI is a less commonly used term than AI, as it’s not really a field of research. It’s an end goal for the field of AI.

We looked at an example of AI and briefly mentioned our AI mechanic. We will come back to this, but first I’ll describe what your standard General Intelligence is (spoiler: it’s you!).

General Intelligence is when a single system (a human, for example) can be used for any number of tasks and can also learn new tasks with relative ease. This is easy for us; we can learn a new hobby, for example, and add it to our skill set.

For computers this is not so easy. Although we have some highly complex DL models doing incredible things, they are nowhere near the complexity of the human brain. So if we take these AI systems and try to get them to learn new tasks, they tend to forget what they already knew (a problem known as catastrophic forgetting).

The reason this is an end goal (although the goalposts are a moving target) is that it would enable AI scientists to draw knowledge and experience from a huge number of complex tasks, then deploy a single AI system on a task it had never seen before and have it perform well, without overwriting any of its previous experience (or memory).

This is a very big idea and seems centuries away from being a reality. However, these fields have been known to exceed expectations and make huge leaps within a few years, so maybe it is only decades away? No one really knows!

Thanks for reading this article! I hope it gave you a bit of insight into these fields, along with enough knowledge and a few real-world examples to spread the word.

Jamie McGowan
The Startup

Research Scientist at MediaTek Research. Postgraduate PhD student in Theoretical Particle Physics at UCL, London.