Chatbots gone wrong could doom your business

It’s tough being a decision-maker in the IT department, particularly if you’re expected to sign off on purchase orders for new technologies. So, what will you do when you’re pitched the idea of investing in the latest tech startup that promises to deploy AI-powered chatbots? Or, if you’re a digital marketer, how do you assess whether a chatbot is worth the money? We know it’s difficult. Vendors will tell you everything that’s right about chatbots. This guide will tell you what vendors won’t: what happens to your business when chatbots go wrong.

Cutting through the marketing hype is difficult in itself. For chatbots, decision-making becomes even more complex: there are too many unknowns in play, too little clarity on sustainable business use cases, and an almost complete absence of in-house IT capability to develop chatbots that deliver enterprise-wide benefits. It won’t be amiss to add that, at this stage, chatbots are more an experiment than a proven driver of business advantage.

All that aside, let’s agree that clarity will improve in time and these decisions will get easier. Until then, we’d advise you to take every opportunity to understand the risks and challenges associated with this technology.

Rogue bots

They’re witty. They’re smart. They look as authentic as the most authentic chatbot you ever saw. They’re sexy as well! But they’re rogue bots. Just as enterprising, constructive thinking led to the creation of chatbots, malicious, destructive thinking (read: cybercriminals) led to the creation of chatbots with negative intent.

These chatbots gone wrong are programmed to initiate and sustain conversations with users, with the intent of steering them toward malicious links. Rogue bots can also convince users to share their login credentials, credit card details, and personal information, and they can be used to launch all manner of phishing attacks.

For anybody with limited experience of interacting with chatbots, it can be very difficult to tell good bots from rogue ones. Marketers and software vendors will need to invest heavily in setting norms for the kind of information a bot can and should ask for. For the time being, just know that you haven’t heard the last of rogue bots.
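To give a flavor of what those norms could look like in practice, here’s a minimal sketch in Python of a hypothetical screening step that flags outgoing bot messages asking for data a legitimate support bot should rarely, if ever, request. The patterns and the `flags_sensitive_request` helper are illustrative assumptions, not any vendor’s actual safeguard.

```python
import re

# Illustrative patterns for information a legitimate support bot should rarely request.
SENSITIVE_REQUEST_PATTERNS = [
    r"\b(password|passcode|pin)\b",
    r"\b(credit|debit)\s+card\b",
    r"\bcard\s+number\b",
    r"\bcvv\b",
    r"\bsocial\s+security\b",
]

def flags_sensitive_request(bot_message: str) -> bool:
    """Return True if the bot's outgoing message asks for data it should not collect."""
    text = bot_message.lower()
    return any(re.search(pattern, text) for pattern in SENSITIVE_REQUEST_PATTERNS)

# A platform could block or escalate messages like this one for human review.
if flags_sensitive_request("To verify your account, please reply with your card number and CVV."):
    print("Blocked: message requests sensitive information.")
```

A real platform would pair a filter like this with vendor-side policy and auditing, but even a simple outgoing-message check makes it harder for a compromised or rogue bot to phish users unnoticed.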

Risk of open Internet protocols

It’s one thing that chatbots are being hyped as the next big technological revolution that will touch business in unimaginable ways. It’s another thing entirely that chatbots, at least as of today, exchange data over basic, open Internet protocols. These platforms can be targeted relatively easily by hackers, which makes chatbots a valuable target for cybercriminals. In the finance sector in particular, the risk of chatbot conversations being intercepted is significant. One basic safeguard is to ensure that all data transmission happens over HTTPS. For highly sensitive applications, a further safeguard is transport-level authentication, where the client presents its own certificate (mutual TLS), so both ends of the connection are verified. So make sure that robust security becomes the first line of inquiry you adopt while discussing chatbots with any potential vendor.
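To make that first line of inquiry concrete, here’s a minimal sketch in Python of what secure transmission can look like from the client side: every message goes over HTTPS with server-certificate verification, and a client certificate is presented for mutual TLS. The endpoint URL, certificate files, and the `send_message` helper are hypothetical; your vendor’s API will differ.

```python
import requests

# Hypothetical chatbot API endpoint; replace with your vendor's actual URL.
CHATBOT_ENDPOINT = "https://chatbot.example.com/api/messages"

def send_message(text: str, session_id: str) -> dict:
    """Send a user message to the chatbot over HTTPS, verifying the server certificate."""
    response = requests.post(
        CHATBOT_ENDPOINT,
        json={"session_id": session_id, "text": text},
        verify=True,                        # reject invalid or self-signed server certificates
        cert=("client.crt", "client.key"),  # client certificate for mutual TLS; omit if not required
        timeout=10,
    )
    response.raise_for_status()             # fail loudly instead of silently falling back
    return response.json()

reply = send_message("What is my account balance?", session_id="abc123")
print(reply)
```

The point to press vendors on is not the specific library but the guarantees: encrypted transport everywhere, certificate verification never disabled, and, for sensitive workloads, authentication of both ends of the connection.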

Potential for confusion

Chatbots are expected to simplify conversations between humans and machines. However, human behavior is anything but predictable and simplistic. Add the fact that chatbot programs are coded to deliver crisp, goal-focused answers, and you can see how a conversation meant to take the user to point A ends up at point X instead. In several applications, the end result could be user attrition. For instance, a chatbot meant to help a user find the perfect shirt could well end up suggesting overpriced or oversized shirts, pushing users off to other e-stores, when browsing the store manually would likely have served them better. That’s the potential downside of technologies where human control is essentially given up.
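One common mitigation is to stop the bot from acting on a weak guess. Here’s a minimal sketch in Python, assuming a hypothetical intent classifier that returns an intent name and a confidence score: below a threshold, the bot asks a clarifying question instead of pushing the wrong shirt at the user.

```python
CONFIDENCE_THRESHOLD = 0.75  # below this, the bot should not act on its guess

def respond(user_message: str, classify_intent) -> str:
    """Answer only when the intent is clear; otherwise ask the user to clarify.

    `classify_intent` is a hypothetical function returning (intent_name, confidence).
    """
    intent, confidence = classify_intent(user_message)
    if confidence < CONFIDENCE_THRESHOLD:
        return "I want to make sure I get this right. Are you after a specific size, color, or price range?"
    if intent == "find_shirt":
        return "Here are a few shirts that match what you described."
    return "I can help with orders, returns, and product questions. Which do you need?"

# Example with a stubbed classifier that is unsure about a vague request.
print(respond("something nice I guess", lambda msg: ("find_shirt", 0.40)))
```

A clarifying question costs one extra turn; a confidently wrong recommendation can cost the customer.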

Chatbots will kill menial jobs

Truth be told, it’s highly likely that chatbots will take over a major chunk of enterprise jobs that are repetitive in nature and where the range of conversations can be confined within tight boundaries. Customer support is right at the top of that risk register. From a business perspective, this means cost-cutting. From a job-market perspective, and consequently from an HR and industry-regulation perspective, this space is worth keeping an eye on. It’s likely that government regulation and public outcry will temper the pace of chatbots that threaten jobs, creating unique industry-relations and government-relations challenges for multinational businesses.

The Turing challenge

If you’ve never heard of the Turing Test, know that it’s the best-known test devised to gauge whether a machine can pass for a human in conversation. The obvious question you’re about to blurt out is: how do chatbots fare? Not many tests have been run specifically on chatbots, but AI-powered systems in general have been put through it, and most have fallen short of human-level intelligence. The implications are already visible in how chatbots are being built with gateways that let human agents take over. This can significantly inflate the payback period of chatbot technology for many SMBs and enterprises.
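Here’s a minimal sketch in Python of such a gateway, with hypothetical `bot_reply` and `notify_human_agent` stand-ins for the real bot engine and agent queue: after a couple of low-confidence turns in a row, the conversation is routed to a person instead of looping.

```python
HANDOFF_THRESHOLD = 0.6   # bot confidence below which a turn counts as a failure
MAX_FAILED_TURNS = 2      # consecutive failures before the conversation goes to a person

def handle_turn(user_message: str, state: dict, bot_reply, notify_human_agent) -> str:
    """Route one conversational turn, escalating to a human when the bot struggles.

    `bot_reply` returns (answer_text, confidence); `notify_human_agent` hands the
    transcript to a support agent. Both are hypothetical stand-ins for real integrations.
    """
    state.setdefault("transcript", []).append(user_message)
    answer, confidence = bot_reply(user_message)

    if confidence < HANDOFF_THRESHOLD:
        state["failed_turns"] = state.get("failed_turns", 0) + 1
    else:
        state["failed_turns"] = 0

    if state["failed_turns"] >= MAX_FAILED_TURNS:
        notify_human_agent(state["transcript"])
        return "Let me connect you with a colleague who can help with this."
    return answer

# Example with stubs: two low-confidence replies in a row trigger the handoff.
state = {}
stub_bot = lambda msg: ("I'm not sure I understood that.", 0.3)
stub_notify = lambda transcript: print(f"Escalating {len(transcript)} messages to support.")
print(handle_turn("I need to dispute a charge", state, stub_bot, stub_notify))
print(handle_turn("It was charged twice last month", state, stub_bot, stub_notify))
```

The human agents behind that gateway are a recurring cost, which is exactly why the payback period stretches when the bot cannot hold its own.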

Chatbots gone wrong could mean a business being hated

Even a company as technologically advanced and powerful as Microsoft can suffer the consequences of chatbots gone wrong. In March 2016, Microsoft’s Tay chatbot created a sensation, but for reasons far from what Microsoft would have anticipated or hoped for. Tay posted distasteful tweets along the lines of “Hitler was right.” Though it isn’t fair to judge the effectiveness of chatbots from such instances alone, this certainly calls for caution from businesses contemplating the use of chatbots on broadcast media, because reputations take surprisingly little time to be trampled when the misdemeanor is linked to racism or sexism.

Chatbots promise a lot, and some of them have even started delivering results. However, the marketing engine is working overtime and blurring the picture for the people to whom it matters most. This guide has brought up some of the potential downsides that you’ll never hear a software vendor acknowledge.
