Inside the black box: Is technology becoming too complicated?

Let me tell you about a car that I used to own. The make and model aren't important. What is important is that at the time I bought it, it was one of the most technologically advanced hybrids on the market. In fact, it was the high-tech gadgetry that first attracted me to the vehicle.

For the first couple of years that I owned it, everything was fine aside from a few quirky bugs in the electronics. I ultimately got rid of the car because those seemingly innocuous electronic gremlins became not only more frequent but also dangerous. There were a few incidents in which the car put itself in gear while I was sitting in a parking space. Thankfully, I always managed to hit the brakes before anything bad happened, but things could have very easily turned out differently.

I can accept that cars, like any other machine, require periodic maintenance and sometimes break down. What bothered me about this particular problem (aside from the obvious) was that the mechanics at the dealership couldn't tell me why it was happening, and essentially just dismissed it as a computer glitch.

To the average consumer, a modern vehicle is essentially a black box system. A black box system is one in which the inputs and outputs can be observed, but the process that turns one into the other is unknown. In older vehicles, the gas pedal was tied to a mechanical linkage that allowed more fuel to flow into the engine. In modern cars, pressing the gas pedal sends a signal to a computer, and the computer decides how much fuel to send to the engine. While this process might not seem all that mysterious, there is not necessarily a linear relationship between the throttle position and the amount of fuel that reaches the engine. Other factors, such as the currently selected driving mode (sport mode, eco mode, etc.), the vehicle's altitude, and the outside temperature, can all play a role in how much fuel the computer decides to deliver.
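
To make this concrete, here is a minimal sketch of what a drive-by-wire fuel calculation might look like. To be clear, every name, mode, and correction factor below is something I made up for illustration; a real engine control unit is vastly more complicated.

```cpp
#include <iostream>

// Hypothetical drive-by-wire sketch: the pedal position is just one input
// among several. All modes and correction factors below are invented for
// illustration; a real ECU is far more complex.
enum class DriveMode { Eco, Normal, Sport };

double fuelRate(double pedalPosition,  // 0.0 (released) to 1.0 (floored)
                DriveMode mode,
                double altitudeMeters,
                double outsideTempC) {
    // Base demand is deliberately non-linear: a gentle curve at low
    // pedal travel, steeper near the floor.
    double base = pedalPosition * pedalPosition;

    // Driving mode scales the driver's request up or down.
    double modeFactor = (mode == DriveMode::Sport) ? 1.2
                      : (mode == DriveMode::Eco)   ? 0.85
                      : 1.0;

    // Thinner air at altitude means less fuel for the same mixture.
    double altitudeFactor = 1.0 - 0.00002 * altitudeMeters;

    // Cold intake air is denser, so slightly richer fueling.
    double tempFactor = (outsideTempC < 0.0) ? 1.05 : 1.0;

    return base * modeFactor * altitudeFactor * tempFactor;
}

int main() {
    // The same 50% pedal press produces different fuel rates depending
    // on everything else the computer happens to know.
    std::cout << fuelRate(0.5, DriveMode::Sport, 0.0, 20.0) << "\n";    // sea level, warm
    std::cout << fuelRate(0.5, DriveMode::Eco, 2000.0, -5.0) << "\n";   // altitude, cold
}
```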

Of course, the idea of a vehicle’s fuel system being a black box isn’t necessarily a problem. You don’t have to be a mechanic (or a computer engineer) in order to drive a car. What might be a problem, however, is if the engineers who designed a car do not know how the fuel system works.

Engineering black boxes

On the surface, the idea of an engineer not understanding their own creation sounds like something from a bad comedy sketch. However, engineering black boxes are probably far more common than most of us realize. I can think of at least four reasons why engineers may not fully understand how their own creations work.

1. It was invented by accident

One reason engineering black boxes exist is that something may have been invented by accident. History is filled with stories of accidental inventions. The microwave oven, for example, was invented by an engineer at Raytheon who was working on a new type of radar. When the engineer stepped in front of his device, the candy bar in his pocket melted, and the rest, as they say, is history. Another example of an accidental invention is aspartame (an artificial sweetener commonly used in diet soda). Aspartame was discovered when a chemist who was creating a compound for use in evaluating an anti-ulcer drug accidentally licked his fingers.

I myself have created things by accident on occasion. When I was a kid, I was really into building homemade electronic gadgets. When I was first getting started, I tried to build a high-gain microphone, only to discover that it effectively functioned as an FM radio. At the time, I didn't understand why my device was acting as a radio receiver. It was a total accident.

2. The device has a black box of its own

A second reason why a particular technology might act as a black box is that it has been built on top of another black box. A game developer, for example, might leverage middleware as a way of simplifying the development process. Because the developer did not create the middleware, they might not fully understand every nuance of how it behaves. That doesn't mean the game won't work as intended, of course. It only means that the developer probably does not have a completely granular understanding of every last instruction that is being sent to the CPU.
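
As a hypothetical illustration (the "physics middleware" below is entirely made up, with a trivial stub standing in for the vendor's library), the game code only ever sees inputs and outputs:

```cpp
#include <cstdio>

// Hypothetical sketch: physicsStep() stands in for a middleware call.
// In a real project its body would live inside a vendor library the
// game developer never reads; the trivial stub here just makes the
// example self-contained.
struct Ball { double height; double velocity; };

void physicsStep(Ball& b, double dt) {             // the "black box"
    b.velocity -= 9.81 * dt;                       // (stub implementation)
    b.height   += b.velocity * dt;
}

int main() {
    Ball ball{10.0, 0.0};
    for (int frame = 0; frame < 3; ++frame) {
        physicsStep(ball, 1.0 / 60.0);             // input goes in...
        std::printf("height = %f\n", ball.height); // ...output comes out
    }
    // The game works, even though the developer may never know what
    // happens between those two lines inside the real middleware.
}
```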

Homemade electronics projects might also contain black boxes. The projects that I created when I was a kid were truly component-level builds. I couldn't even begin to guess how many transistors, capacitors, and diodes I soldered in my youth.

Today, however, most do-it-yourself electronics projects seem to revolve around microcontroller boards such as the Arduino, or single-board computers such as the Raspberry Pi. These boards can themselves be thought of as black boxes. I have built several Arduino-based devices that did not initially function as planned. I then had to go back and figure out whether I had wired something incorrectly, made a programming error, or whether the microcontroller simply did not feel like playing nice (a common problem with some of the generic Arduino clones).
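
When that happens, my first step is to take my own code out of the equation. Flashing a trivial, known-good sketch, such as the classic Arduino blink example below, tells me whether the board itself is behaving before I go hunting for bugs in my wiring or my program:

```cpp
// A minimal known-good Arduino sketch. If even this doesn't run,
// the problem is the board (or its power/USB connection), not my
// program logic or my external wiring.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);   // on-board LED, no external wiring needed
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);
  delay(500);                     // half a second on...
  digitalWrite(LED_BUILTIN, LOW);
  delay(500);                     // ...half a second off
}
```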

3. A lack of testing

A third reason why a technology might act as a black box is inadequate testing. I'm sure we can all think of examples of products that were rushed to market in spite of their many flaws. Enough said.

4. Mathematical complexity

Perhaps the most prevalent (and most disturbing) reason for a technology to function as a black box has to do with mathematical complexity. It is becoming increasingly common for devices to rely on interwoven algorithms that are so complex that it is impossible for humans to fully interpret them.

I will be the first to admit that this idea sounds ridiculous. Logically, it would seem that if a human is smart enough to create an algorithm, then a human should be smart enough to predict the algorithm’s output based on its input. However, there comes a point at which there is just too much complexity.

There is an entire branch of mathematics that revolves around something called chaos theory. At its most basic, chaos theory says that in certain systems, tiny differences in the initial conditions can produce wildly different outputs, making the output impossible to predict in practice. Let me give you an example.

Imagine for a moment that there is a pile of sugar sitting on a table and that you decide to drop one more sugar crystal onto the pile. Particle physics should be able to accurately model the motion of that sugar crystal. Ultimately, however, it is impossible to predict with certainty what will happen when the sugar crystal hits the pile because the existing sugar pile is essentially an unknown. It is made up of thousands of individual sugar crystals in random positions. Any one of those crystals could potentially impact the crystal that is being dropped, thereby altering its trajectory in an unknown way.
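
You don't need a sugar pile to see this effect. The logistic map, a textbook example of chaos (not something from my car saga, just a standard illustration), fits in a few lines of code: start two runs that differ by one part in a billion, and within a few dozen iterations they have nothing in common.

```cpp
#include <cstdio>

// The logistic map x(n+1) = r * x(n) * (1 - x(n)) is chaotic at r = 4:
// two nearly identical starting points end up nowhere near each other.
int main() {
    const double r = 4.0;
    double a = 0.200000000;   // initial condition A
    double b = 0.200000001;   // differs from A by one part in a billion

    for (int n = 0; n < 50; ++n) {
        if (n % 10 == 0)
            std::printf("n=%2d  a=%.6f  b=%.6f  diff=%.6f\n", n, a, b, a - b);
        a = r * a * (1.0 - a);
        b = r * b * (1.0 - b);
    }
    // By roughly n = 30 the two trajectories are completely uncorrelated,
    // even though the system is a one-line formula with no randomness.
}
```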

So with that in mind, consider some of today’s AI systems. The more advanced AI systems do not depend on a single algorithm, but rather multiple advanced algorithms, all of which feed into one another. When you also consider that input can be simultaneously generated by thousands of individual sensors (such as would be the case for a driverless car), it quickly becomes apparent why it is impossible to predict the output of a complex set of algorithms in a highly dynamic environment.

Seeing inside the black box

Engineering black boxes are nothing new. They have always existed, especially with regard to things that were invented by accident or technologies that were completely experimental. Even so, there is one key difference in the state of technology today. In the past, it was always possible to study a black box until its behavior could be completely understood. For the first time in history, we have reached a point at which it may not always be possible to know for sure why something works the way that it does.

My guess is that this concept is going to lead to some difficult choices. One option will be to simply accept that technology is becoming far more complex and that we aren’t always going to be able to understand why devices do what they do. In the case of a driverless car, for example, this might mean taking a leap of faith and determining the vehicle to be safe based on thousands of hours of testing, even though some of the vehicle’s decisions might seem counterintuitive.

On the other hand, it may become necessary for engineers to come up with a way to make an AI explain itself. Suppose, for example, that an insurance company (which operates in a heavily regulated industry) uses an AI algorithm to set insurance premiums. There will undoubtedly come a point at which customers, regulators, and even investors will demand to know the rationale behind those premiums. At that point, someone will need to be able to ask the AI why two seemingly similar customers are paying vastly different rates. Simply saying “that’s what the computer came up with” isn’t going to satisfy anyone.
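
To illustrate what "explaining itself" might even mean, here is a deliberately toy sketch (every feature, weight, and customer below is invented). If a premium model were as simple as a weighted sum, an explanation could just itemize each factor's contribution. The catch, of course, is that real AI pricing models are nowhere near this simple, which is precisely the problem.

```cpp
#include <cstdio>

// Toy premium model: a weighted sum of made-up risk features. Because
// the model is linear, it can "explain itself" by itemizing each
// feature's contribution. Real AI pricing models are not linear, which
// is exactly why their answers are hard to justify.
struct Customer { double age; double claimsLast5Years; double milesPerYear; };

const double BASE   = 500.0;   // invented base premium
const double W_AGE  = -2.0;    // invented weights
const double W_CLM  = 150.0;
const double W_MILE = 0.01;

double premium(const Customer& c) {
    return BASE + W_AGE * c.age + W_CLM * c.claimsLast5Years
                + W_MILE * c.milesPerYear;
}

void explain(const char* name, const Customer& c) {
    std::printf("%s: base %.0f, age %+.0f, claims %+.0f, mileage %+.0f => %.0f\n",
                name, BASE, W_AGE * c.age, W_CLM * c.claimsLast5Years,
                W_MILE * c.milesPerYear, premium(c));
}

int main() {
    // Two "seemingly similar" customers whose rates differ for a reason
    // this model can actually articulate: two past claims.
    explain("Customer A", {40, 0, 12000});
    explain("Customer B", {40, 2, 12000});
}
```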
