AI cyber risks: What to look out for when deploying AI technology

When the average non-techie hears the term artificial intelligence, what probably first comes to mind is robots as depicted in popular sci-fi movies like “I, Robot.” Imagined that way, it would seem we’re a long way from AI becoming part and parcel of everyday living. Yet AI is already more embedded in daily work and home life than most people realize. Virtual assistants such as Siri, Alexa, and Google Assistant handle everyday tasks like running Internet searches, unlocking doors, and transferring money. The list of AI applications is long and growing every day, and the benefits include faster and sharper credit scoring, better disease diagnosis, and enhanced engineering capabilities. AI has certainly improved modern life.

Nevertheless, AI comes with certain risks, especially as it relates to cybersecurity. It’s not inconceivable that cyberthieves would target a bank’s AI-powered customer recognition software, or that a malicious competitor would attack a business’s AI pricing algorithm. It won’t be long before AI-driven identity theft and malware kits are freely or inexpensively available on the Dark Web.

Generally, AI cyber risks come in two forms. The first is the infiltration of legitimate AI programs. The second is the use of purpose-built AI programs that exploit vulnerabilities in a target organization’s systems. Each of the AI cyber risks discussed below falls largely into one of these two categories.

AI cyber risks

1. AI system infiltration can go undetected

To better comprehend the cyber risk of AI, it’s important to understand how an AI system operates. Machine-learning algorithms (one form of AI system) work by analyzing input and output data, then using that knowledge to tweak the system for different circumstances. In other words, the algorithm learns by doing and refines the process iteratively. As far as cybersecurity goes, this presents two risks. The first: because the AI system is empowered to make decisions and deductions in an automated way, with little to no human intervention, a compromise or infiltration can go undetected for a while. The second risk is covered in the next section.
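
To make that feedback loop concrete, here is a minimal sketch (not any vendor’s actual product) of an online learner that retrains itself on each new batch, using scikit-learn’s SGDClassifier on synthetic data. The point to notice is that nothing in the loop would alert a human if an attacker quietly poisoned the incoming stream:

```python
# Minimal sketch: an online learner that updates itself on every new batch.
# Synthetic data stands in for real telemetry; the point is that nothing
# in this loop flags a human reviewer if the incoming batches are poisoned.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

for batch in range(100):
    X = rng.normal(size=(32, 10))      # new input data
    y = (X[:, 0] > 0).astype(int)      # observed outcomes
    # An attacker who controls part of the stream could flip labels here;
    # the model would quietly absorb the bad data and drift.
    model.partial_fit(X, y, classes=[0, 1])

print("model updated on", (batch + 1) * 32, "samples with no human review")
```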

2. Difficulty in interpreting AI actions

The underlying reason a machine-learning program makes certain decisions and deductions won’t always be immediately apparent to system administrators. The algorithm’s logic can be extremely complex and not readily interpretable or transparent. So even when administrators detect what seems to be a clear violation, the reason for it may remain opaque for a while. That means the violation could be dismissed as a mere glitch in the system even when it’s the result of an attacker’s active efforts to take control of the AI system. With machine-learning systems increasingly being entrusted with controlling physical systems, the potential repercussions are immense, including death, injury, and destruction of property.
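
A small illustration of this opacity, using a stock scikit-learn IsolationForest on synthetic telemetry (the feature values here are arbitrary): the detector hands back a verdict and a score, but no rationale an administrator could use to distinguish a glitch from an attack:

```python
# Minimal sketch of the interpretability gap: an anomaly detector returns
# a score and a binary verdict, but no explanation an operator can act on.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_events = rng.normal(size=(500, 6))       # synthetic "normal" telemetry
detector = IsolationForest(random_state=1).fit(normal_events)

suspicious_event = rng.normal(size=(1, 6)) + 4  # clearly out of distribution
verdict = detector.predict(suspicious_event)    # -1 means anomaly
score = detector.decision_function(suspicious_event)

# All the operator sees is a number -- glitch or active attack? The model
# cannot say, which is exactly the ambiguity described above.
print(f"verdict={verdict[0]}, score={score[0]:.3f}")
```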

3. AI algorithms are usually freely available


AI algorithms are often public and the software frequently open source. They are widely available on the web and fairly easy to use. The same open-source software and libraries that serve legitimate purposes can also become a source of vulnerability that works in criminals’ favor. Just as software-as-a-service has gained traction, malware-as-a-service is a fast-growing area of criminal enterprise that could provide a ready platform for the proliferation of AI security threats. In addition, there’s an unspoken rivalry among criminals on the Dark Web as they battle for the title of “baddest malware ever.”
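
To illustrate how low the barrier to entry is, here is a toy text classifier built entirely from freely available open-source components (the training examples are made up). The same few lines of scikit-learn could serve a defender building a phishing filter or an attacker automating target triage:

```python
# Minimal sketch of the low barrier to entry: a working text classifier in a
# dozen lines of free, open-source tooling. The training data below is toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["verify your account now", "quarterly report attached",
         "your password expires today", "lunch at noon?"]
labels = [1, 0, 1, 0]   # 1 = phishing-like, 0 = benign

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
print(clf.predict(["reset your password immediately"]))   # -> [1]
```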

4. AI can help malware and hackers avoid detection

Whereas a sizeable number of cybersecurity vendors are integrating behavioral analytics, machine learning, and other AI features into their products, the majority of antimalware systems still depend heavily on signature-based detection. Attackers can create malware that hides its origin and technique, making it more difficult for conventional security tools to detect its digital fingerprint. It’s already possible to purchase tailor-made malware on the Dark Web that evades detection by the leading antivirus products. An AI malware kit can add stealth that keeps it a step ahead of updates to antivirus software and other defensive systems.
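
A stripped-down sketch of why signature-based detection is brittle (the “signature database” here is a single made-up hash): detection hinges on an exact fingerprint match, so even a trivial mutation of the payload slips through:

```python
# Minimal sketch of the weakness of signature-based detection: matching is
# done on an exact fingerprint, so any mutation of the payload evades it.
import hashlib

known_signatures = {hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest()}

def is_flagged(sample: bytes) -> bool:
    """Flag a sample only if its hash exactly matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in known_signatures

print(is_flagged(b"EVIL_PAYLOAD_v1"))   # True  -- exact match caught
print(is_flagged(b"EVIL_PAYLOAD_v2"))   # False -- one byte changed, undetected
```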

To put the magnitude of the threat in perspective, think about botnets. A botnet is a network of thousands of compromised devices under the direction of command-and-control (C&C) software that tells them what to do. Botnets are a powerful weapon often leveraged in the execution of DDoS attacks.

Now imagine a botnet under the control of an AI algorithm that gives it substantial autonomy from human direction. It would keep track of which attacks work and which don’t, improving its own effectiveness over time. It could tailor its approach to the most viable vulnerabilities it encounters on a target network. Unlike a monolithic attack that behaves the same regardless of its target, the AI botnet’s self-direction means every target and task is handled according to its unique circumstances, allowing it to penetrate and compromise more hosts.
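
As a purely abstract illustration of that feedback loop (no attack code, just an epsilon-greedy bandit over placeholder strategy labels with simulated outcomes), note how quickly a self-directing agent concentrates effort on whatever works:

```python
# Abstract sketch: an epsilon-greedy bandit showing how any feedback-driven
# agent shifts effort toward whatever has succeeded so far. "Strategies" are
# opaque labels; success is simulated at random. Nothing here touches a network.
import random

strategies = ["A", "B", "C"]
successes = {s: 0 for s in strategies}
attempts = {s: 0 for s in strategies}
true_rates = {"A": 0.1, "B": 0.5, "C": 0.2}   # hidden from the agent

random.seed(0)
for _ in range(1000):
    if random.random() < 0.1:                 # explore occasionally
        choice = random.choice(strategies)
    else:                                     # otherwise exploit the best so far
        choice = max(strategies,
                     key=lambda s: successes[s] / attempts[s] if attempts[s] else 0)
    attempts[choice] += 1
    successes[choice] += random.random() < true_rates[choice]

print(attempts)   # effort concentrates on "B", the most effective option
```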

5. AI can lead to complacency

AI can also jeopardize cybersecurity in a subtler way. As more organizations adopt AI and ML products for their security infrastructure, the takeover by machines could create an illusion of security that lulls IT and InfoSec professionals into complacency. Given the potential dangers posed by AI tools, this could be a catastrophic mistake.

If anything, ramping up AI use should go hand in hand with ramping up security. No AI-based cybersecurity solution will ever be 100 percent foolproof. Therefore, AI should only be an addition to an organization’s existing security framework and not a replacement for the basic controls required to keep criminals and malware at bay.

6. AI can accelerate cyberattacks


AI can increase the speed, resilience, and success rate of a cyberattack. The most time-consuming activities in preparing a cyberattack, such as sifting through enormous volumes of data, can be performed at machine speed without the need to pause for breaks. The capacity to quickly interrogate unstructured data would allow AI malware to detect links that are invisible or nearly indecipherable to the human eye.

Since the algorithms are self-learning, they can become smarter with each failure and thus tailor each subsequent attack to the new knowledge they’ve acquired. Hackers can automate exploit writing and vulnerability identification. It may eventually be possible to build AI algorithms that predict a target’s response and execute in a way that avoids triggering defense mechanisms.

Harness the benefits while mitigating AI cyber risks

To boost their defenses against AI-powered cyberattacks, businesses must adopt a two-pronged approach. The first prong is shielding their own AI tools from attack. The second is protecting both their AI and non-AI digital assets from AI-powered attacks. Companies must evaluate how AI is used in the business and then develop specific controls to mitigate the risk of an attack. New technologies like AI and ML are often a double-edged sword. Whether they become an asset to a business is contingent on the company’s ability to harness the benefits while mitigating the risks.
