AI, machine learning, and deep learning: What they are and how they differ


Artificial intelligence is no longer the stuff of science fiction flicks. It’s a reality, and chances are you interact with, and are affected by, AI-powered applications every day. AI seems to be the phrase on everybody’s lips, from makers of autonomous trucks that can travel thousands of miles without human intervention to truck drivers who fear they’ll be out of a job if those trucks make it to the roads. In 2016, Google’s DeepMind AlphaGo program competed against Lee Se-dol, a South Korean master of the board game Go, and emerged victorious. Media coverage used the terms AI, machine learning, and deep learning interchangeably, as if they all meant the same thing. The truth is, they don’t. All three technologies contributed to AlphaGo’s victory in their own way, but they are distinct. Had the coverage been written by an AI-powered press release program, perhaps the mistake wouldn’t have happened!

To make sense of the latest revolutions in the world of technology, it’s crucial that you understand the nuances that separate AI, machine learning, and deep learning. So read on.

AI: Technology that makes machines behave like intelligent humans


Of AI, machine learning, and deep learning, AI is the broadest term, encompassing all technologies related to advanced computer intelligence. Its roots go back to 1956, when the Dartmouth Artificial Intelligence Conference gave the term its generally accepted meaning. The basic idea expressed at the conference was that every aspect of human intelligence could be described so precisely that a computer program could be made to simulate it.

Three stages or categories of AI

Narrow artificial intelligence: This is the term used to refer to programs, algorithms, and technologies that can simulate human-level intelligent behavior, but only for a specific task. For instance, chess-playing computers such as IBM’s Deep Blue are an example of narrow artificial intelligence: their intelligence is task specific.

Artificial general intelligence: This term refers to a level of computer intelligence that is on par with human intelligence across a range of tasks. We’re still far from achieving artificial general intelligence in computers, even in the most advanced laboratories.

Super-intelligent AI: This is a look into the future: a level of artificial intelligence that combines scientific thinking, creative outlooks, and general wisdom to the extent that the machine possessing it could surpass humans as the most intelligent “beings” on the face of Earth.

Machine learning: A viable approach to enable AI

Machine learning is best understood as a subset of artificial intelligence. The basic principle of machine learning algorithms is that they use large volumes of data to detect patterns, and then make decisions based on those patterns. This is a major advance over the traditional method of conditional coding, in which every possible scenario must be captured in the code, along with a definition of the behavior the program should exhibit in each case. Machine learning, abbreviated ML, has been a major shot in the arm for AI-powered applications. In fact, one reason AI took almost 60 years to come to fruition is that effective ML algorithms are a relatively recent development. Advances in ML algorithms, together with the surge in Big Data capabilities, are the fuel for AI.

ML algorithms draw on advanced decision-making techniques such as decision tree learning, clustering, inductive logic programming, reinforcement learning, and Bayesian networks, among others. These algorithms consume large volumes of data and use the patterns they find, along with comparisons of outcomes against expectations and feedback loops, to become “smarter” over time. To put things in perspective, IBM’s Deep Blue computer was explicitly coded for all its capabilities, whereas Google’s DeepMind AlphaGo algorithm used large datasets of board game moves to “learn” and eventually defeat its human opponent.
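To make the contrast with conditional coding concrete, here is a minimal sketch in plain Python. The data, names, and the simple one-dimensional “threshold learner” are all invented for illustration; real ML libraries generalize this idea to many dimensions and far richer models. The point is that the decision rule is found in the data rather than hard-coded by a programmer:

```python
# Illustrative sketch: instead of hard-coding a rule such as
# "flag any reading above 6", the learner searches labeled
# examples for the threshold that best separates the classes.

def learn_threshold(samples, labels):
    """Find the 1-D threshold that misclassifies the fewest samples."""
    best_threshold, best_errors = None, len(samples) + 1
    for candidate in sorted(set(samples)):
        # Predict 1 when value >= candidate, 0 otherwise; count mistakes.
        errors = sum(
            (value >= candidate) != bool(label)
            for value, label in zip(samples, labels)
        )
        if errors < best_errors:
            best_threshold, best_errors = candidate, errors
    return best_threshold

# Hypothetical data: sensor readings labeled 0 (normal) or 1 (faulty).
readings = [1.0, 1.5, 2.0, 6.0, 7.5, 8.0]
labels   = [0,   0,   0,   1,   1,   1]

threshold = learn_threshold(readings, labels)
predict = lambda value: int(value >= threshold)
```

Feed the learner different data and it produces a different rule, with no code changes — that, in miniature, is the shift from conditional coding to machine learning.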

Deep learning: ‘Neural’ way of implementing machine learning

Deep learning is a subset of machine learning. The core of deep learning is the neural network, a programmatic simulation of the kind of decision making that takes place inside the human brain. However, unlike the human brain, where any neuron can establish a connection with any other proximate neuron, neural networks have discrete connections, layers, and directions of data propagation.

Just like machine learning, deep learning depends on the availability of massive volumes of data for the technology to “train” itself. For instance, a deep learning system meant to identify objects in images will need to run millions of test cases before it can build the “intelligence” that lets it fuse several kinds of analysis together and actually identify the object in an image.
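To illustrate what “discrete connections, layers, and data propagation directions” look like in practice, here is a toy feed-forward network in plain Python. The weights are hand-picked rather than learned (a real network would learn them from data), and they happen to make the network compute the XOR function, a classic example that a single neuron cannot solve but a two-layer network can:

```python
# Toy feed-forward neural network: data flows in one direction
# through discrete layers, unlike the brain's arbitrary wiring.

import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias per neuron."""
    return [
        sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        for neuron_weights, bias in zip(weights, biases)
    ]

def relu(values):
    return [max(0.0, v) for v in values]

def sigmoid(value):
    return 1.0 / (1.0 + math.exp(-value))

def forward(inputs):
    # Layer 1: two inputs -> two hidden neurons (ReLU activation).
    hidden = relu(layer(inputs,
                        weights=[[1.0, -1.0], [-1.0, 1.0]],
                        biases=[0.0, 0.0]))
    # Layer 2: two hidden neurons -> one output neuron (sigmoid).
    (output,) = layer(hidden, weights=[[1.0, 1.0]], biases=[-0.5])
    return sigmoid(output)
```

With these weights, the output rises above 0.5 exactly when the two inputs differ. A real deep learning system stacks many such layers and learns the weights from millions of examples instead of having them written in by hand.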

Let’s revisit the example of Google’s DeepMind AlphaGo program that beat a human master of the board game Go in March 2016. The technology in this case combined Monte Carlo tree search with neural networks to bring about the landmark outcome. Potential applications of deep learning-based systems are being explored in areas such as financial fraud detection, malware and spam detection, handwriting recognition, speech recognition, image search, street view detection, text-based search, and translation.

The cohesive AI system

Because deep learning has been instrumental in enabling many successful ML applications, which in turn have broadened the reach and success of AI in general, these three technologies are best viewed as a coherent and symbiotic ecosystem in which each layer feeds the next’s success. Deep learning is proving to be a highly viable way of realizing machine learning, and advanced ML algorithms are making advanced AI applications a reality.

Deep learning is instrumental in breaking tasks down into small steps, to the extent that all kinds of algorithm-driven, machine-assisted operations become possible. The present belongs to AI, machine learning, and deep learning, and the future has never looked this exciting.
