Too much information? Why your sensor data may be going social

It isn’t exactly a big secret that our phones, PCs, and tablets produce a lot of data. We all create data on a daily basis in the form of email messages, text messages, photos, videos, and the list goes on and on. At the same time though, our smart devices produce other types of data that most people never see, at least not directly. I’m talking about sensor data.

Modern smart devices are equipped with numerous sensors that allow them to do all sorts of amazing things. Even the most basic smartphone or tablet probably includes:

• An ambient light sensor
• One or more cameras
• One or more microphones
• A touch sensor
• A GPS receiver
• An accelerometer
• A magnetometer
• A barometric pressure sensor
• A gyroscope
• One or more thermistors

Of course, modern, high-tech devices often include a sensor array that goes far beyond the relatively modest list outlined above. These and other sensors are what allow the apps on our phones to do all of those amazing things that we have all become so accustomed to.

A lot of data

It is only on the rarest of occasions, however, that the average person thinks about the raw data that is produced by these sensors. When I was taking aerobatic flying lessons, for example, I remember after one flight wishing that my phone had been in my pocket so that I could show a couple of friends the 7G pull that surely would have been logged by my phone’s accelerometer. Beyond that, though, my phone’s accelerometer data isn’t exactly the sort of thing that I think about on a daily basis.

Somewhat surprisingly, however, the rather mundane sensor data that our phones and other smart devices are constantly registering might eventually be put to use by the social networks. While I seriously doubt that Facebook or any of the other social networks have an interest in anything that I might have done while flying an aerobatic aircraft, our phones’ raw sensor data can paint a rather detailed picture of our lives.

Now I realize that some of you are probably reading this and thinking to yourselves that this is old news. After all, social networks have been using our phones’ GPS data for years. But so far the social use of GPS data has been somewhat limited. A social app might, for example, detect that you are at a particular restaurant, tourist hot spot, or other public venue and ask you if you want to post about it. However, according to a recently filed patent application, next-generation social apps may take the use of GPS and other raw sensor data to the next level.

The patent application referenced in the previous paragraph is a bit technical, but the essence of the patent is that a social network might be able to make friend suggestions based on a person’s real-life interactions with others.

Just for the sake of clarity, I’m not talking about a social media app that recommends friending someone who you work with or someone who you hang out with on the weekends. Social networks have been able to do that for years. I’m talking about detecting brief but common interactions with someone who is basically a stranger.

Let me give you an example. I am one of those people who goes to the grocery store almost every day (at least that’s the case when I am not traveling). In spite of the frequency with which I visit the store, I don’t have any sort of relationship with any of the store’s employees. Aside from their first names, which are pinned to their uniforms, I don’t know anything about them. Even so, there are several employees that I have minor interactions with on a regular basis. The girl who was working the cash register last night, for example, has probably rung up my groceries dozens of times. Similarly, another employee in the deli made a vegetarian sandwich for me, just as she has done many times in the past. These are just two examples of people who I interact with on a regular basis but have no real connection to.

So with that in mind, let’s go back to the subject of a phone’s sensor array, and how the data created by those sensors might be put to work by the social networks. Imagine for a moment that I have my phone on me every time that I go to the grocery store and that I have a particular social networking app installed on my phone. Let’s also suppose that the grocery store’s employees have their phones on them while they are working and that they have the same social networking app installed. What might the social networks be able to derive based on raw sensor data from the phones?

Going beyond GPS data


GPS data could obviously be used to tell the social networks that the store’s employees and I are in the same place. Other sensors, however, could theoretically be used to gauge not only my physical proximity to individual employees but also the amount of time spent in close proximity to a specific employee. If the software were to determine that I spend two minutes with a particular cashier several times a week, it might infer that I know the cashier and make a friend suggestion based on that inference.
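That kind of dwell-time inference is simple enough to sketch. Here is a minimal, purely hypothetical Python version; the thresholds, names, and log format are my own assumptions, not anything taken from the patent application:

```python
from collections import defaultdict

# Hypothetical sketch: infer acquaintance from repeated co-location.
# Each encounter is (person_a, person_b, seconds_in_close_proximity).
# The thresholds below are illustrative assumptions.

MIN_SECONDS = 120     # e.g., two minutes at the register
MIN_ENCOUNTERS = 3    # "several times a week"

def suggest_friends(encounters):
    """Return pairs whose repeated long-enough encounters
    exceed the thresholds above."""
    counts = defaultdict(int)
    for a, b, seconds in encounters:
        if seconds >= MIN_SECONDS:
            counts[tuple(sorted((a, b)))] += 1
    return [pair for pair, n in counts.items() if n >= MIN_ENCOUNTERS]

log = [
    ("me", "cashier", 130),
    ("me", "cashier", 125),
    ("me", "cashier", 140),
    ("me", "deli", 90),   # too brief to count
]
print(suggest_friends(log))   # [('cashier', 'me')]
```

A real system would obviously need to estimate proximity from signals like Bluetooth or GPS first; this sketch only shows the inference step.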

One thing to keep in mind, however, is that modern smartphones are equipped with numerous sensors and have capabilities far exceeding simple proximity detection. Just imagine how accelerometer data might be used. A social networking app might compare the accelerometer data from two people’s phones to see if they are engaged in the same activity as one another. If two people are jogging, for example, the accelerometers in their phones would produce similar data. The same might hold true if both people were riding in a car, or even just standing around doing nothing.
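To illustrate, here is a rough sketch of how two accelerometer traces might be compared. The use of Pearson correlation, the made-up sample data, and the 0.9 cutoff are all my own illustrative assumptions:

```python
import math

# Hypothetical sketch: compare two phones' accelerometer magnitude
# traces with Pearson correlation to guess whether their owners are
# engaged in the same activity. All sample data is invented.

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Simulated acceleration magnitudes (in g) for two joggers whose
# strides produce a similar rhythmic pattern.
jogger_a = [1.0, 1.8, 0.4, 1.9, 0.5, 1.7, 0.6, 1.8]
jogger_b = [1.1, 1.7, 0.5, 1.8, 0.4, 1.9, 0.5, 1.7]

print(pearson(jogger_a, jogger_b) > 0.9)   # True: similar rhythm
```

In practice an app would compare features extracted from the signals (step frequency, intensity, and so on) rather than raw samples, but the underlying idea of matching two traces is the same.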

This brings up another important point that I have yet to hear anyone talk about, and that is the way that an AI engine might use sensor data to determine exactly what someone is doing at a given moment. Admittedly, I didn’t take the time to read the entire patent application, so this may have been in there. If it was, I didn’t see it.

At any rate, imagine for a moment that a social networking company wants to learn more about its users’ daily routines. A good way of doing that might be to examine sensor readings that were recorded under controlled conditions and then use that data to train an AI engine.

Let’s use the act of jogging as an example. A social network might get a group of volunteers to put a phone in their pocket as they go for a jog. Upon each volunteer’s return, the phone’s sensor data is analyzed by an AI engine that essentially learns what a jog looks like. The AI engine might analyze the vibrations that were picked up by the accelerometer, or the speed and distance that were logged by the GPS. With a large enough sample size, the AI engine could learn what a jog “looks like” and would then be able to spot signatures within phone sensor data indicating that the phone’s owner was jogging.
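As a toy illustration of that training idea, here is a hypothetical nearest-centroid classifier that “learns” a jog from labeled accelerometer traces. The features (mean and standard deviation of the signal) and the data are invented for the sketch; a real system would use far richer features and models:

```python
import statistics

# Hypothetical sketch: learn an activity "signature" from labeled
# accelerometer traces, then match new traces to the nearest one.

def features(trace):
    # Two toy features: average level and variability of the signal.
    return (statistics.mean(trace), statistics.pstdev(trace))

def train(labeled_traces):
    """labeled_traces: {activity: [trace, ...]} -> centroid per activity."""
    centroids = {}
    for activity, traces in labeled_traces.items():
        feats = [features(t) for t in traces]
        centroids[activity] = (
            statistics.mean(f[0] for f in feats),
            statistics.mean(f[1] for f in feats),
        )
    return centroids

def classify(trace, centroids):
    # Return the activity whose centroid is closest in feature space.
    f = features(trace)
    return min(
        centroids,
        key=lambda a: sum((x - y) ** 2 for x, y in zip(f, centroids[a])),
    )

training = {
    "jogging":  [[1.8, 0.4, 1.9, 0.5], [1.7, 0.5, 1.8, 0.6]],
    "standing": [[1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.1, 1.0]],
}
model = train(training)
print(classify([1.9, 0.5, 1.7, 0.4], model))   # jogging
```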

By having that information, the social network might sell targeted ad campaigns to gyms, displaying the ad to those who are known to stay fit. The same data could also be sold to insurers who might be inclined to set premiums based on lifestyle choices. The social network could make this type of data recognition more palatable to its users by offering to connect them to other people who jog on the same route at similar times of the day. Of course, I am only using jogging as an example. A sensor data signature could theoretically be created for any number of activities.

Sensor data doesn’t always make sense

I will be the first to admit that I’m not particularly keen on the idea of the social networks collecting even more data and analyzing every aspect of my daily routine. Even so, as a tech geek, I have to confess that I am fascinated by all of the ways that seemingly mundane data can be put to work. The more mischievous side of me also finds it amusing to think of the ways that such a data collection effort might be confused by parabolic flight.

As a commercial astronaut candidate, I regularly participate in parabolic (zero gravity) flights for training and research purposes. On several occasions, I have seen tablets and phones lose their ability to hold the screen in the correct orientation while weightless. The device’s accelerometer registers 0G in all directions, and so the device literally does not know which way is up. I can only imagine how much something like that might confuse a social network that is monitoring sensor data, especially given that several co-workers are always on those flights with me and their phones would also be registering “invalid” data that matches the data that my phone would be producing.
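To see why 0G is so confusing, consider a toy version of the orientation logic. The freefall threshold and the decision rule below are my own illustrative assumptions, not how any particular phone actually implements it:

```python
import math

# Hypothetical sketch: a phone picks screen orientation from the
# direction of gravity, but in freefall all axes read near 0 g and
# there is no "down" to find. Readings are in units of g.

FREEFALL_G = 0.3   # below this total acceleration, assume weightlessness

def orientation(ax, ay, az):
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude < FREEFALL_G:
        return "unknown"          # ~0 g everywhere: which way is up?
    # Otherwise pick the screen axis most aligned with gravity.
    if abs(ax) >= abs(ay):
        return "landscape"
    return "portrait"

print(orientation(0.0, -0.98, 0.1))   # portrait: gravity along -y
print(orientation(0.02, 0.01, 0.03))  # unknown: parabolic flight
```

Feed that same "unknown" condition into any activity-recognition pipeline and it is easy to see how an entire planeload of matching near-zero traces could look like one very strange shared activity.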

Featured image: Shutterstock
