Although biometric devices such as facial recognition cameras and fingerprint scanners once existed only in Hollywood films, such devices not only exist today but have become somewhat mainstream. My Microsoft Surface Book 2, for example, uses facial recognition-based authentication rather than requiring me to enter a password. Now that facial recognition technology has matured to the point of being reliable, I can’t help but wonder what is next for the technology. After all, nearly every technology evolves as new use cases are discovered. When it comes to facial recognition, however, I think that hints as to where the technology is headed might actually lie in the past.
Before her untimely passing, my sister was something of a photography junkie (so am I, for that matter). She always had the latest camera and quickly figured out how to use all of its various features. Back around 2007, give or take a couple of years, my sister showed up at a family gathering with a camera that supported facial recognition technology. I can’t remember the make or model of the camera, but I remember her telling me that once the camera learned what someone looked like, it could automatically tag the people in each picture.
The thing that I found most interesting about the camera’s facial recognition capabilities, however, was that the camera was smart enough to look for things that might otherwise ruin a photo. According to my sister, the camera could tell if the person being photographed wasn’t smiling, or had their eyes closed.
I never had the chance to take my sister’s fancy new camera for a test drive, so I have no way of knowing how well its various capabilities worked. Even so, the camera’s features prove one thing beyond any shadow of a doubt. Even way back then, the tech industry was already looking beyond using facial recognition purely as an identification or authentication technology. There was interest in using the technology to detect emotion.
As it turns out, cameras that use software to figure out when someone is smiling were really just the beginning of using facial recognition to detect emotion. According to the Associated Press, Amazon has developed facial recognition software capable of detecting basic emotions such as fear. The article also mentioned that the Washington State Police are already using the technology amid backlash from those who are concerned about how such a technology might be abused.
Personally, I kind of have my doubts about the reliability of software that is designed to detect emotion based on a person’s photograph. I say this because of a personal experience. I’m not the most photogenic person in the world, and I couldn’t tell you how many times someone has snapped a photo of me while I have a goofy expression on my face. I just can’t imagine how some of those photos might be interpreted by software. But let me give you another example.
Below is a photograph that I took of one of my cats when he was a kitten. As you look at the photograph, what would you consider the cat’s emotional state to be? I can’t speak for anyone else, but I think that the cat in the picture looks really aggressive. His body posture and facial expression suggest that he is perhaps about to attack someone. In reality, though, the picture was taken a few seconds after my cat had woken up from a nap. He was in the middle of a stretch and a yawn when the photo was taken. So, although software designed to detect emotion might describe my cat’s mental state as angry and aggressive, his real emotion was sleepy. With that in mind, just imagine what the potential consequences could be if the cops were to rely on software that is designed to assess a person’s emotional state.
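To make that ambiguity concrete, here is a purely hypothetical toy sketch. Real emotion-detection products use trained models rather than hand-written rules, but the underlying problem is the same: the visible features of a yawn overlap heavily with those of a snarl, so a system keying on those features can confidently produce the wrong answer.

```python
# Toy illustration (hypothetical, not any vendor's actual algorithm): a naive
# rule-based "emotion" classifier built on two crude geometric measurements.
# A mid-yawn face has a wide-open mouth and narrowed eyes -- exactly the
# features a simple rule might associate with aggression.

def classify_expression(mouth_open_ratio: float, eye_open_ratio: float) -> str:
    """Guess an emotional state from two made-up geometric features.

    mouth_open_ratio: 0.0 (closed) .. 1.0 (wide open)
    eye_open_ratio:   0.0 (closed) .. 1.0 (wide open)
    """
    if mouth_open_ratio > 0.6 and eye_open_ratio < 0.4:
        # Wide-open mouth plus narrowed eyes *could* be a snarl...
        return "aggressive"
    if mouth_open_ratio < 0.2 and eye_open_ratio > 0.6:
        return "calm"
    return "unknown"

# My cat, captured mid-yawn: mouth wide open, eyes squeezed nearly shut.
print(classify_expression(mouth_open_ratio=0.9, eye_open_ratio=0.1))
# -> "aggressive", even though the honest answer was "sleepy"
```

The sketch is deliberately simplistic, but it captures why a single-frame snapshot is such shaky evidence of someone’s actual mental state.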
Even though tech companies are working on ways of using facial recognition technology to assess emotion, my guess is that the bulk of the work being done on facial recognition at the moment involves combining it with other object recognition technologies.
Human faces are not the only types of objects that computers can recognize. For instance, we have probably all seen those annoying speed trap cameras that are designed to read license plate numbers. But imagine the possibilities if a computer could perform several different types of recognition at the same time. Here is a somewhat simple example.
Someone in my family owns an off-road vehicle that they use primarily for farm work. The vehicle’s manufacturer equips its off-road vehicles with a governor that is tied into the driver’s seatbelt. If the driver neglects to wear the seatbelt, the governor limits the vehicle’s speed to roughly 10 MPH. For whatever reason, the person who owns the vehicle hates wearing a seatbelt, especially when driving on their private property. To circumvent the vehicle’s governor, this person has gotten into the habit of fastening the seatbelt behind the driver’s seat. That way, they can drive as fast as they want and don’t have to be bothered with wearing a seatbelt.
I mention this particular example because vehicle manufacturers are already working on using facial and object recognition technology to monitor drivers. Some of the nearly self-driving cars, for example, watch to make sure that the driver does not take their eyes off the road. It doesn’t seem like such a stretch to think that such a system could be used to verify that the driver is actually wearing a seatbelt, rather than fastening it before they even get into the vehicle. Similarly, I have been hearing rumors that one of the motorcycle manufacturers is thinking of using similar technology to verify that riders are wearing helmets.
In the next five years or so, I suspect that similar technologies will be employed in the workplace. Imagine for a moment how the webcam on a user’s laptop might be used to improve employee productivity. Initially, such a system might monitor a particular user’s facial expressions to look for signs of fatigue, and then encourage the person to go get some caffeine when necessary. Over the longer term, the system might be adapted to learn how the user actually performs job-related tasks. A backend AI system might then analyze the user’s activities and help find ways to make the employee more efficient.
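The fatigue-detection piece of that scenario isn’t far-fetched: one technique published in the computer-vision literature is the eye aspect ratio (EAR), computed from six landmarks around each eye. When the eye closes, the vertical landmark distances shrink while the horizontal span stays the same, so the ratio drops toward zero. The sketch below uses made-up landmark coordinates for illustration; a real system would get them from a face-landmark model, and the 0.2 threshold is just an assumed example value.

```python
import math

# Sketch of eye-aspect-ratio (EAR) drowsiness detection. The eye is described
# by six (x, y) landmarks ordered around its outline (p1..p6, as in the common
# 68-point facial landmark layout). The coordinates below are illustrative
# values, not output from any actual landmark detector.

def eye_aspect_ratio(eye):
    """Compute EAR: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])  # eyelid gaps
    horizontal = dist(eye[0], eye[3])                        # eye width
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.2  # assumed cutoff: below this for several frames ~ eyes closed

open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]

print(eye_aspect_ratio(open_eye))    # well above the threshold
print(eye_aspect_ratio(closed_eye))  # near zero -> flag possible fatigue
```

In practice, a production system would run this per video frame and only raise a fatigue alert after the ratio stayed below the threshold for a sustained stretch, to avoid flagging ordinary blinks.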
While there is little doubt in my mind that facial recognition software and backend AI systems can be used to improve health, safety, and productivity, I also think that the tech industry needs to be extraordinarily careful with how it allows these technologies to be applied. Otherwise, we could quickly find ourselves living in a world in which someone is constantly looking over our shoulder, scrutinizing every decision. As if that idea isn’t frightening enough, I can even imagine an automated system (such as one designed to detect seatbelt use) actively contacting law enforcement if it detects a violation. Personally, I have no desire to live in a world in which my gadgets can call the cops any time that they think I am doing something wrong.
Featured image: Shutterstock