
Machine learning? Neural networks? Here’s your guide to the many flavors of A.I.


A.I. is everywhere at the moment, and it’s responsible for everything from the virtual assistants on our smartphones to the self-driving cars soon to be filling our roads to the cutting-edge image recognition systems reported on by yours truly.

Unless you’ve been living under a rock for the past decade, there’s a good chance you’ve heard of it before — and probably even used it. Right now, artificial intelligence is to Silicon Valley what One Direction is to 13-year-old girls: an omnipresent source of obsession to throw all your cash at, while daydreaming about getting married whenever Harry Styles is finally ready to settle down. (Okay, so we’re still working on the analogy!)

But what exactly is A.I.? And can terms like “machine learning,” “artificial neural networks,” “artificial intelligence” and “Zayn Malik” (we’re still working on that analogy…) be used interchangeably?

To help you make sense of some of the buzzwords and jargon you’ll hear when people talk about A.I., we put together this simple guide to help you wrap your head around all the different flavors of artificial intelligence — if only so that you don’t make any faux pas when the machines finally take over.

Artificial intelligence

We won’t delve too deeply into the history of A.I. here, but the important thing to note is that artificial intelligence is the tree that all the following terms are branches of. For example, reinforcement learning is a type of machine learning, which is a subfield of artificial intelligence. However, artificial intelligence isn’t (necessarily) reinforcement learning. Got it?


There’s no official consensus on what A.I. means (some people suggest it’s simply cool things computers can’t do yet), but most would agree that it’s about making computers perform actions that would be considered intelligent were they carried out by a person.

The term was first coined in 1956, at a summer workshop at Dartmouth College in New Hampshire. The big distinction today is between domain-specific Narrow A.I. and Artificial General Intelligence. So far, no-one has built a general intelligence. Once they do, all bets are off…

Symbolic A.I.

You don’t hear so much about Symbolic A.I. today. Also referred to as Good Old Fashioned A.I., Symbolic A.I. is built around logical steps which can be given to a computer in a top-down manner. It entails providing lots and lots of rules to a computer (or a robot) on how it should deal with a specific scenario.
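As a rough illustration of that top-down approach (a toy example of our own, not any particular historical system), a Symbolic A.I. program is essentially a pile of hand-written rules:

```python
# A toy, hand-rolled rule-based "expert system" for a thermostat.
# Every case the designer anticipates gets an explicit rule; nothing is learned from data.

def thermostat_action(temperature_c: float, window_open: bool) -> str:
    """Decide what to do using hard-coded, top-down rules."""
    if window_open:
        return "do nothing (the window is open)"
    if temperature_c < 18:
        return "turn heating on"
    if temperature_c > 24:
        return "turn cooling on"
    return "hold"

print(thermostat_action(15.0, window_open=False))  # -> "turn heating on"
```

The appeal is obvious: every decision can be traced back to a rule a human wrote. The drawback is that the real world keeps producing scenarios nobody wrote a rule for.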

This approach led to a lot of early breakthroughs, but it turned out that these systems worked very well in labs, where every variable could be perfectly controlled, and often far less well in the messiness of everyday life. As one writer quipped, early Symbolic A.I. systems were a little bit like the god of the Old Testament — with plenty of rules, but no mercy.

Today, researchers like Selmer Bringsjord are fighting to bring back a focus on logic-based Symbolic A.I., arguing for the superiority of logical systems that can be understood by their creators.

Machine Learning

If you hear about a big A.I. breakthrough these days, chances are that unless a big noise is made to suggest otherwise, you’re hearing about machine learning. As its name implies, machine learning is about making machines that, well, learn.

Like A.I. itself, machine learning has multiple subcategories, but what they all have in common is a statistics-driven approach: taking data and applying algorithms to it in order to gain knowledge.
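As a minimal sketch of that idea (assuming Python with numpy and scikit-learn installed, and entirely made-up data), here is a program that estimates a simple relationship from noisy examples and then uses it to predict inputs it never saw:

```python
# Learn from data: fit a line to noisy points, then predict unseen values.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))              # 100 example inputs
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1, 100)    # noisy targets: y is roughly 3x + 2

model = LinearRegression()
model.fit(X, y)                 # the "learning" step: estimate parameters from the data

print(model.coef_[0], model.intercept_)   # roughly 3 and 2
print(model.predict([[12.0]]))            # generalizes to an input it never saw (about 38)
```

No one told the program that the rule was “multiply by three and add two”; it worked that out from the examples.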

There are a plethora of different branches of machine learning, but the one you’ll probably hear the most about is…

Neural Networks

If you’ve spent any time in our Cool Tech section, you’ve probably heard about artificial neural networks. As brain-inspired systems designed to replicate the way that humans learn, neural networks adjust their own internal parameters to find the link between input and output — or cause and effect — in situations where this relationship is complex or unclear.


The concept of artificial neural networks actually dates back to the 1940s, but it was really only in the past few decades that they started to live up to their potential, aided by the arrival of algorithms like “backpropagation,” which lets a network adjust the weights in its hidden layers of neurons when the outcome doesn’t match what the creator is hoping for. (For instance, a network designed to recognize dogs that misidentifies a cat.)

This decade, artificial neural networks have benefited from the arrival of deep learning, in which different layers of the network extract different features until it can recognize what it is looking for.
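To make that concrete, here is a deliberately tiny sketch (pure numpy, one hidden layer, nothing like a production system) of a network learning the XOR function via backpropagation: the forward pass turns inputs into an output, and the backward pass nudges the weights whenever that output misses the target.

```python
# A tiny neural network with one hidden layer, trained by backpropagation to learn XOR,
# a relationship no single straight-line rule can capture.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))   # input -> hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer extracts features from the one before it.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the error back through the layers
    # and adjust the weights so the outputs move toward the targets.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out
    b2 -= 0.5 * err_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ err_h
    b1 -= 0.5 * err_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should end up close to [[0], [1], [1], [0]]
```

Deep learning stacks many more such layers, so that later layers can pick out progressively higher-level features, as described above.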

Within the neural network heading there are different network architectures, with feedforward and convolutional networks likely to be the ones you should mention if you get stuck next to a Google engineer at a dinner party.

Reinforcement Learning

Reinforcement learning is another flavor of machine learning. It’s heavily inspired by behaviorist psychology, and is based on the idea that a software agent can learn to take actions in an environment in order to maximize a reward.

As an example, back in 2015 Google’s DeepMind released a paper showing how it had trained an A.I. to play classic video games, with no instruction other than the on-screen score and the approximately 30,000 pixels that made up each frame. Told only to maximize its score, the software agent gradually learned to play each game through trial and error.
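The loop behind that kind of result is simpler than it sounds. Here is a bare-bones sketch of tabular Q-learning, a classic reinforcement learning algorithm, run on a made-up five-cell corridor (not DeepMind’s Atari system): the agent only ever sees its current state and a score, and it improves its behavior by trial and error.

```python
# Tabular Q-learning on a toy corridor: states 0..4, reward only at the right end.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                          # move left or move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # estimated value of each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit what has been learned so far.
        action = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda a: Q[state][a])
        nxt = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Nudge the value of (state, action) toward reward plus discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

# The learned policy: action 1 ("move right") for every state on the way to the goal.
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```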