AI algorithms: New frontier or legal quagmire?

I’ve been thinking a lot about algorithms and artificial intelligence lately and how AI-enabled technologies are going to affect our lives, our societies, and the world around us. Whether it’s Facebook data-mining every last bit of our personalities and behavior, or self-driving cars blissfully carrying us into a stress-free future, algorithmic computing and its asymptotic goal of “intelligence” have been in the news frequently these days.

Tech Bubble 3.0?

My first reaction to the flood of so-called technological “advances” (many of which are still dreams) is that we’re entering the growth period of another tech bubble. The first bubble popped in 2000 when the NASDAQ peaked and began its precipitous decline, draining away the lifeblood of aspiring dot-com businesses and the savings of millions of investors.

The second bubble has been fizzling more slowly since the Great Recession hit in 2008. In fact, that bubble has been deflating so slowly that many feel we’re still not even in a bubble as far as Internet media companies like Facebook and Twitter are concerned (Facebook even tried to deny that it’s a media company but eventually gave in).

And while many tech pundits identify the notoriously insecure and mostly unwanted Internet of Things (IoT) as the Next Big Thing (with bubble-like growth forecast by numerous sources, a forecast fortunately debunked by the notoriously sensible IEEE), it seems the tech news media have already become somewhat bored with that topic and are now touting AI-cum-algorithms as The Thing To Watch. That to me can only mean one thing: Bubble coming!

Promises, promises

Whether you call it artificial intelligence, deep learning, machine learning, big data, or simply algorithms, technology companies are jumping like grasshoppers onto the bandwagon, trying to figure out how to reap the rewards of being first movers in such innovations. Self-driving cars are one notable example that has been steadily gaining ground in the real-world marketplace. But there are many other areas where AI is threatening to make inroads that, while bringing benefits, may also result in significant disruption.

Diagnosing medical conditions is one area where algorithmic computing promises to be more accurate than human thinking; The New Yorker has a long article on this topic that’s well worth reading and that suggests “knowing together” as the goal most likely to yield the greatest benefit. That idea of human-machine cooperation (yes, you can think cyborg here) highlights something basic about AI, namely that it works both ways: humans create intelligent machines to make life easier, and then the intelligent machines begin to change how humans think and act. Returning to driverless cars, for example, this article on TechSpot describes research into autonomous vehicle algorithms that supposedly will make traffic jams a thing of the past. Well, if we tweak the algorithms to improve machine efficiency, might that not have a blowback effect on the behavior of us non-machines caught up in the same scenario?

AI also promises to make flying and riding trains and buses faster and less dangerous by using algorithms to predict delays or even eliminate the need for pilots or drivers. Algorithms likewise promise to make the legal profession more efficient by automating much of the work that lawyers and their researchers and assistants do. And AI is beginning to encroach on the financial services sector, enabling not just algorithmic trading but also AI-generated investment counseling and advice.
Algorithms in law enforcement promise to make “Minority Report”-style crime prediction a reality, at least to some degree. And algorithms are even affecting us in the IT sector, especially in customer support, where chatbots now handle level 1 requests for many companies. Eventually, most IT pros will be affected in one way or another.
Why all the rush? It’s obvious: to lower costs. Why pay for a paralegal when a cloud service can do a better job faster? Who will benefit? Consumers, ostensibly, but in reality it will be mostly big companies like IBM and Google that are investing heavily in AI. What will the world be like once everything is algorithmized? Who knows!

Should AI be regulated?

The question I struggle with most in this flood of innovation is regulation. Where should the lines be drawn? And who should draw them? Should governments step in and become involved? Should algorithms be open to auditing to help determine liability when death, injury, or property damage is attributed to an AI-enabled system?

Europe seems to be taking the lead in this regard. Article 15 of the EU’s General Data Protection Regulation (PDF), which takes effect a few months from now, stipulates that EU citizens have the right to meaningful information about the logic involved when an automated system makes an algorithmic decision concerning them. And Article 22 gives citizens the right not to be subject to decisions based solely on automated processing that produce legal effects concerning them or similarly significantly affect them. I understand this to mean that companies like Google or Tesla, which create systems or offer services that involve algorithmic decision-making, will be required to allow citizens to opt out of being managed or controlled by those systems and services. It also seems to me that such companies will need to become a whole lot more transparent about exactly how their algorithms work, something they may be loath to do, since they would likely prefer to safeguard the secrecy of their algorithms as “intellectual property.”

The United States, on the other hand, is likely to give AI companies freer rein in how they innovate. Such freedom is part of what helps drive innovation in the U.S., but until the dust settles, the resulting legal quagmire is likely to be nothing short of painful. And realistically, it’s no longer U.S. companies that drive such innovations but large multinationals doing most of their manufacturing in China.

The bigger issue really is one of liability. When a part in a car fails and causes an accident because the part was defective, current laws allow liability to be placed on the manufacturer of the part. So if you injure someone because your brakes failed due to a defect, you would likely not be held liable (though you might have to go to court to prove that). But if your autonomous vehicle behaves irrationally because of a defect in an algorithm, current U.S. laws make it difficult to determine who is liable, since algorithms are generally not publicly available for inspection. This means that without significant changes in U.S. law, the mass introduction of algorithmic systems in transportation, medicine, manufacturing, and finance is likely to lead to many more difficult and costly court cases as courts try to build up a body of case law to stand in for the missing regulatory frameworks.

And who will benefit from such a flood of court cases? Lawyers, obviously. But if the lawyers get replaced by algorithms then … well, what?
