Everyone seems to be talking about artificial intelligence, or AI, these days. On the one hand there is the promise of enormous benefits to industry, society, and individuals. On the other hand there is the doomsday scenario of AI-enabled robots taking over the world.
Then there’s the vast middle ground between the two, where reality and common sense reside.
AI will certainly bring benefits in some areas, but its impact in many other areas may be negative or even nonexistent. That’s because there’s really no such thing as artificial intelligence. What everyone is really getting excited (or nervous) about might better be described as intelligent automation, or IA.
But we’ll get to that in a moment. First let’s examine some of the hype around AI, especially as it applies to the daily job of the IT professional.
Artificial intelligence in the news
Forbes recently reported on a study indicating that almost half of all European firms classified as “AI startups” don’t actually use AI in any material way in their business. From a business perspective this isn’t very surprising: investors have been throwing money right and left hoping to hit on the next AI unicorn and make a bundle in return. So it’s definitely to the advantage of entrepreneurs to sell their startup as leveraging the power of AI to make their products and services better than those of competitors.
And IT companies aren’t immune to this phenomenon. RSA Conference 2019, one of the cybersecurity industry’s biggest trade shows, is taking place as I write this article. The conference promises a whole slew of cybersecurity vendors touting AI-enabled offerings as the solution to all your organization’s IT security problems. For instance, MarketWatch recently noted a Cisco report stating that “AI and machine learning, used right, are essential to the initial stages of alert prioritization and management.” Essential, maybe; helpful, certainly. But the qualifier “used right” is dead on, because any automated approach to vulnerability assessment and threat detection is only as good as the learning algorithms it’s based on.
In fact, the whole idea of using AI to make your company’s cyber assets more secure may suffer from a fundamental flaw: any cybersecurity tool is only as trustworthy as the company that makes it. So when as respected an industry publication as the MIT Technology Review starts warning that China’s tech giant Huawei is threatening to outstrip U.S. technology companies in AI, both in theoretical developments and in practical, marketable applications, the question of whom you’re trusting with your security becomes hard to ignore.
And while most Europeans and North Americans are leery of anything that smacks of Big Brother, especially in the area of new technologies, other parts of the world are embracing AI-enabled technologies, viewing the personal and social benefits they bring as trumping privacy concerns. In Japan, for instance, a software company called Vaak uses AI to make security cameras capable of identifying potential shoplifters before they snatch items off store shelves. I don’t know about you, but I’m a fidgety kind of guy, maybe from drinking too much coffee, and cameras make me nervous. I’d hate to have some robocop roll up to me while I’m shopping and inform me that I’m under arrest for planning to shoplift.
So is AI really about implementing human-like intelligence using software? Can a device or app really fool a user into believing some hidden human actor is secretly involved in its operation? Of course. But can a machine or piece of software really think like a human being? And is the world headed toward a singularity where intelligent machines take over and mankind is displaced, enslaved, or even eliminated?
Nonsense. Take, for example, the popular understanding of “neural networks,” one of the primary tools used in machine learning: that they mimic the interconnectedness of neurons in the human brain. But neural networks aren’t designed to imitate the structure of our brains; they’re designed to imitate certain higher-level behaviors our nervous system exhibits, such as pattern recognition and decision-making.
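To see why “brain-like” is a stretch, here’s a minimal sketch of a single artificial neuron (hand-picked weights, not learned, and purely illustrative). It’s nothing but a weighted sum pushed through a threshold, here wired to “recognize” the pattern where both inputs are active, i.e. logical AND:

```python
# A single artificial "neuron" is just a weighted sum pushed through
# a threshold; nothing brain-like about the structure. The weights
# below are hand-picked, not learned, so the unit fires only when
# both inputs are active (logical AND).
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # step "decision"

weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
# Only the (1, 1) input produces a 1.
```

That a stack of such units can do useful pattern recognition says something about the math, not about neurons.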
In fact, wrong ideas like these are founded on a complete misunderstanding of what AI really is, further inflamed by the hype of AI vendors and the tech media. When you boil everything down to its bare essentials, the so-called AI revolution happening right now isn’t due to some fundamental discovery or breakthrough; it’s the result of the convergence of several recent trends:
- The rapidly growing interconnectedness of the world, especially through the rapid growth of the mobile Internet.
- The ability companies have to harvest vast amounts of data and store it in the cloud.
- The exponential growth of raw computing power available in the cloud.
- Advances in mathematical theory involving game theory, minimax search, multivariate regression, and several other areas, all tied together with various pieces of thread (algorithms).
Couple these trends with advances in software automation and the eagerness of entrepreneurs to develop new products and find new efficiencies in existing markets, and you’ve got intelligent automation, or IA, the term I said I prefer over artificial intelligence, or AI (though it doesn’t have as nice a ring to it). A network monitoring or security product that utilizes deep learning isn’t any more “artificial” than an app built using traditional monitoring and alert algorithms. It may be more intelligent in that it can be automated to recognize and react in more useful ways than traditional monitoring software (for example, by distinguishing better between true and false positives), but it’s still artificial and isn’t human-like in any fundamental way. In one way, though, these recent sensational applications of AI are a bit like the human brain: their success confounds even those who created the theory behind how they’re supposed to work (arXiv paper, PDF; warning: lots of math, just read the first two paragraphs).
Real artificial intelligence vs. the hype
Adnan Darwiche of the Computer Science Department at UCLA suggests that we’re overstating things when we claim that automating tasks we previously couldn’t automate, like speech recognition and autonomous navigation, means we’ve succeeded in replicating or mimicking human intelligence in software. In his recent arXiv paper titled Human-Level Intelligence or Animal-Like Abilities? (PDF) he divests AI concepts like neural networks, machine learning, and deep learning of much of their hype by explaining why AI has lately been so successful at these formerly difficult tasks. I highly recommend reading his paper if you’re interested in demystifying AI for yourself.
As I indicated above, intelligent automation seems to me a more fitting moniker for what’s been going on. That’s because most AI advances today amount to fitting a curve to tons of data using multivariate nonlinear regression analysis, then using algorithms, in automated fashion, to revise the fit as new data is collected and processed. In other words, AI is a mathematical black box expressed in software: feed enough stuff into it and you’ll get something interesting out. Keep feeding the monster, and what you get out is sometimes more interesting than before.
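The fit-then-revise loop I’m describing can be sketched with a toy online regression: a linear model that nudges its parameters every time a new observation is fed in. This is a deliberately simplified illustration on synthetic data, not any vendor’s actual algorithm:

```python
import random

random.seed(0)

# Toy "black box": a linear model y ≈ w*x + b, revised by a small
# gradient step each time a new observation arrives, mimicking the
# fit-then-revise loop described above.
w, b = 0.0, 0.0   # model parameters, initially uninformed
lr = 0.01         # learning rate

def observe(x, y):
    """Update the fit with one new (x, y) data point."""
    global w, b
    error = (w * x + b) - y   # prediction error on this point
    w -= lr * error * x       # gradient step on the slope
    b -= lr * error           # gradient step on the intercept

# Feed the monster: samples from the true relationship y = 2x + 1.
for _ in range(5000):
    x = random.uniform(-1, 1)
    observe(x, 2 * x + 1)

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

Scale x up to thousands of dimensions, swap the line for a deep network, and run it on cloud hardware: the mechanics stay the same, which is exactly why “intelligent automation” fits better than “intelligence.”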
But getting back to applications of AI to things that interest IT professionals, like network monitoring and cyberthreat protection, let me conclude by saying that not all vendors of products and solutions in these areas are playing the hype card. Incorporating AI advances into monitoring and protection solutions does lead to significant gains in some areas, and in future articles I’ll describe some vendors that have been accomplishing this successfully.
In the meantime, however, beware the latest hype, ’cause the vendors peddling it are just out to get your money.
Featured image: Pixabay