Just the facts: These five AI myths need to die right now

When a technology is as popular, broad in scope, and hyped as artificial intelligence, it soon becomes a talking point for every business problem. In a recently published Accenture study, 42 percent of enterprise executives agreed that AI would be the prime driver of innovation by 2021. Artificial Intelligence (AI), apart from being one of the most exciting areas of technology today, is also one of the most misunderstood. Many people don’t really understand how AI is different from machine learning (ML) and how ML is different from deep learning. Then, there are giant AI myths that need immediate slaying. And that’s what we’re going to do for you in this guide.

Myth #1: Artificial intelligence will make most humans jobless

Let’s take the transportation industry as an example. It’s one of the major employment generators in the United States in terms of the number of people employed. Artificial intelligence is a big discussion point in this industry, with rapid progress in self-driving trucks and unmanned aerial vehicles. So, the impact of artificial intelligence on the employment dynamics of an industry can’t be ignored. However, it’s wrong to oversimplify things.

A straightforward transfer of labor from humans to machines isn’t going to happen. Artificial intelligence is certainly transforming employment in general, but it’s not necessarily bad for humans. Just like the industrial revolution transformed employment from purely physical labor on farms to more organized labor in factories, AI will also bring in positive transformations.

Enterprises, hence, need to look at artificial intelligence as an enabler that makes workforces more efficient and helps humans do more value-adding work in general.

Myth #2: Throw lots of data at AI and it gives you knowledge

That would be true for a time-tested AI program working within well-defined constraints. However, as a general statement, it’s just another of our AI myths.

Any artificial intelligence-powered program is dependent on algorithms for its intelligence, and those algorithms depend heavily on “good” data to create meaningful information. But “good” data is a very ambiguous term.

  • For starters, the data has to be easily digestible by the AI algorithm.
  • Then, the data needs to be cleaned to make sure that outliers are excluded.
  • Also, the data needs to be of high quality and highly relevant to the analysis the algorithm performs, for the results to be of value (a minimal cleaning sketch follows this list).

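To make these points more concrete, here is a minimal data-preparation sketch in Python, assuming a tabular dataset loaded with pandas. The file name, column names, and the three-standard-deviation cutoff are illustrative assumptions, not a prescribed pipeline.

```python
# A minimal "good data" preparation sketch (illustrative names and thresholds):
# make the data digestible, exclude outliers, and keep only relevant fields.
import numpy as np
import pandas as pd

raw = pd.read_csv("shipments.csv")  # hypothetical input file

# 1. Make the data digestible: drop missing values, enforce consistent types.
df = raw.dropna(subset=["distance_km", "delivery_hours"]).copy()
df["distance_km"] = df["distance_km"].astype(float)

# 2. Exclude outliers, here anything beyond three standard deviations.
z = (df["delivery_hours"] - df["delivery_hours"].mean()) / df["delivery_hours"].std()
df = df[np.abs(z) < 3]

# 3. Keep only the columns relevant to the analysis at hand.
features = df[["distance_km", "delivery_hours"]]
```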
Remember that more data isn’t always the right solution. In fact, it’s now a famous story that IBM’s supercomputer Watson, when used to play “Jeopardy!”, started giving bad results after it was loaded with additional information.

Myth #3: AI will replace workers whose work is manual, low skill, and repetitive

It’s too naïve to make this a general statement. Artificial intelligence’s most remarkable use cases and applications, without a doubt, are in areas where it has taken repetitive work away from humans. But does it make humans unnecessary? Not really.

In the medical field, artificial intelligence is helping physicians make accurate and early diagnoses by analyzing hundreds of reports and scans in a jiffy and highlighting anomalies that could be missed by the naked eye. Can the same algorithm assess the patient’s profile and recommend the most suitable remedies? No. Can the algorithm’s verdict be treated as the incontestable truth? No. The application is great: it frees up the physician’s time by analyzing patient reports without being fatigued by the data. However, this alone doesn’t take away the physician’s job, or that of the lab technician, for that matter.

Myth #4: AI is new

Of all the AI myths, this is the oldest, because AI certainly isn’t new. The idea of machines emulating the human “way” of analysis, or “learning” over time through exposure to more data and opportunities to evaluate past outcomes, isn’t remotely new. In the 1950s, American scientist John McCarthy coined the term “artificial intelligence” and went on to be a pioneer of the science for almost five decades.

If you look deeper, there were mainstream movies that explored the idea of “intelligent” machines even prior to the 1950s. The idea appears new because of the way it’s being marketed, and because of the coming together of several business use cases that have made it much more imaginable and believable.

Myth #5: AI and ML are interchangeable terms

For the layman, it might be all right to call artificial intelligence machine learning, or vice versa. However, this will not do your friendship with a data scientist any good.

Artificial intelligence refers to the ability of machines to showcase human-like intelligence in performing complex tasks, such as processing and responding to natural language, identifying objects in images, solving complex problems with dozens of variables at play, etc.

Machine learning is best understood as a subset of artificial intelligence, where an algorithm becomes better over time (synonymous with “learning”) by using more training data and by identifying and evaluating patterns within the data.

Machine learning, hence, is one of many approaches to realizing artificial intelligence in algorithms, and thus in the machines and software applications that use those algorithms.
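As a minimal illustration of the “learning” part, the sketch below trains the same algorithm on progressively larger slices of a dataset and shows its test accuracy improving. It assumes scikit-learn is installed; the dataset and model are illustrative choices, not a recommendation.

```python
# Illustrative sketch: the same algorithm, given more training examples,
# generally gets better at identifying patterns (assumes scikit-learn).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (100, 500, len(X_train)):  # grow the training set
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> test accuracy {model.score(X_test, y_test):.2f}")
```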

More AI myths

While the five AI myths above are arguably the most entrenched, there are a few more doing the rounds.

AI is purely advanced mathematics: To a great extent, machine learning (a subset of AI) uses advanced mathematics, but that doesn’t mean that every AI application depends on such analysis. In fact, as long as the data is clean and contextualized, even simple algorithms can create amazing results that make machines seem intelligent.

AI can replicate human behavior: Though there are efforts being made to achieve this state (cognitive computing), we’re still far from that stage. An artificial intelligence-based algorithm is only as good as the data scientist and programmer involved in its development. It can’t think, in the true sense of the word.

AI will wipe humans off the planet: Conspiracy theories, anyone? Yes, this may have scared Stephen Hawking, but it shouldn’t scare you. We’re at a stage where chatbots make ridiculous mistakes in processing basic human voice commands. Alexa isn’t coming to kill you, don’t worry.

This is an exciting time to be alive; AI is disrupting industries and helping humans do work quicker, better, and cheaper. That said, AI myths are everywhere, and it’s time you knew how to separate them from reality.
