Google Glass & Invisible Data

Google Glass, the wearable computer with an always-on video sensor and over-eye display, freaks a lot of people out. There are very real privacy issues that come into play when anyone can take photos or video surreptitiously and, more importantly, when doing so becomes normal and acceptable in society. Judging by the number of people trying to preemptively ban the device, society is definitely not ready for this to become normal. However, notice that I called it a video sensor, not a video camera. My point is that while someone probably could use the technology for evil ends, I’m not convinced it’s actually designed to do the things everyone is so afraid it will do.

On the far end of the spectrum are life-loggers: people who embrace the idea of recording and sharing everything. We’ve been doing some experimentation at the IPG Media Lab with POV video, which basically means video cameras strapped to your head. I tried out a bunch of wearable video-recording eyeglasses and earpieces. These are legitimately video cameras, because they have only two states: off, and recording video that is saved to memory for later viewing. A lot of people use these wearable cameras (like the Looxcie) to broadcast their experiences in real time. Unless they choose to turn the thing off, they are never not recording whatever is happening in front of them. Trying these devices out, I felt that publicly life-logged video streams living forever on the net are an annoyance at best, and at worst an inescapable panopticon of intentional and accidental junk shots. “No, thank you,” I thought. After this experience, I was really skeptical about the chances of Google Glass ever succeeding.

But something clicked for me recently as I was reading Robert Scoble’s blog about his experience wearing the device, and I realized I’ve been thinking about it all wrong. What if the camera end of the device is not intended to be constantly recording video or creating a photographic record, but is simply acting as a sensor to create a record that is 99% actionable data and only 1% saved images? Even a modest 640x480 frame holds over 300,000 pixels, which works out to more than seven million bits of uncompressed data, and video capturing 15 of those frames per second stacks up gigabytes very quickly. These human-viewable images are data overload for both humans and machines. Computers don’t need nearly this much information to make a decision, and humans would need two lifetimes to create the video and then view it all. A camera-based sensor can scan all those pixels for meaning in real time, dump the image from memory, and keep only the information it needs. In this way, it actually acts a lot like your own eyes and memory: keeping the important moments that contain information you need, and discarding all the long, dull stretches in between.
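
To make that back-of-envelope math concrete, here’s a quick sketch in Python. The frame size, frame rate, and the ~200-byte “event” record a sensor might keep per frame are all illustrative assumptions on my part, not Glass’s actual specs.

```python
# Back-of-envelope: raw video volume vs. extracted "actionable data".
# All figures here are illustrative assumptions, not actual Glass specs.

WIDTH, HEIGHT = 640, 480   # modest VGA frame
BITS_PER_PIXEL = 24        # uncompressed RGB
FPS = 15                   # frames per second, as in the post

bits_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL   # 7,372,800 bits
bytes_per_second = bits_per_frame * FPS / 8
gb_per_hour = bytes_per_second * 3600 / 1e9

# A sensor that scans each frame and keeps only a small record of what
# it saw (say, ~200 bytes: timestamp, detected objects, location).
EVENT_BYTES = 200          # hypothetical extracted record size
mb_per_hour_events = EVENT_BYTES * FPS * 3600 / 1e6

print(f"Raw video:      {gb_per_hour:.1f} GB/hour")        # ~49.8 GB/hour
print(f"Extracted data: {mb_per_hour_events:.1f} MB/hour") # ~10.8 MB/hour
```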

I think there’s precedent for a device like this gaining societal acceptance. When the TSA started using full-body scanners, there was a brief uproar over the idea that TSA employees were checking out our naughty bits, yet once people saw the lumpy humanoid images the scanners actually create, the furor died down. Similarly, before Microsoft released the Kinect, nobody could fathom why you would want to subject yourself to a video camera in your own living room. Once people figured out that the device isn’t really meant to take video or pictures of your living room, just to figure out where objects are in the space directly in front of it, they quickly got over it. You could even look to Gmail for an example. When Google first released this “free” email service with an unheard-of amount of cloud storage, the “price” was letting Google scan the contents of your email and serve you ads based on what was written there, and critics scoffed at the idea that such an invasive product would ever catch on. Clearly, they were wrong. What all these products have in common is that they handle the most sensitive data in our lives, yet do not judge us or betray us, the users.

I firmly believe that Google is crafting a data architecture for Glass that does not betray the user, because Google does not want to be in the morality business. The data these devices collect could potentially be incredibly damaging, but if it is stored and parsed in such a way that damaging facts can be hidden, and the user controls what data is used and what is not, the device will quickly become a trusted extension of one’s self, much the way smartphones already have. The default setting is to save all pictures and video recordings to a private area of Google+, where the user must explicitly choose to make them public. Google Glass doesn’t really do much that the smartphone doesn’t already do; it simply gives smartphone addicts a new, faster, more immersive way to interact with data about the world around them. The real innovation here is not Google Glass. The innovation is, was, and will continue to be the incredibly powerful way that apps can augment your experience of the physical world using real-time, passively collected data. Glass is simply a better app interface.
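
As a toy illustration of what “private by default, public only by choice” looks like as a data model, here’s a sketch; the MediaItem type and its fields are my own invention, not Google’s actual schema or API.

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"   # only the owner can see it
    PUBLIC = "public"     # visible to others

@dataclass
class MediaItem:
    owner: str
    uri: str
    # Everything the device captures starts out private; nothing is
    # shared unless the user takes an explicit action.
    visibility: Visibility = Visibility.PRIVATE

    def publish(self) -> None:
        """Sharing is an explicit, user-initiated step, never a default."""
        self.visibility = Visibility.PUBLIC
```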

One of the more interesting apps I’ve been playing with lately is called Place Me. If I turn it on, it politely runs in the background, passively listening to my phone’s GPS data, matching those points in space against a database of places, and then giving me a list of places and businesses it thinks I’ve visited, with a tally of the hours I spent at each. It’s usually pretty accurate, and provides an interesting way to quantify my time, over time. This sort of data could be used in very useful ways by marketers, and supercharged by a device like Google Glass. For example: “Hey Tim, we noticed you eat lunch at Plant on the Embarcadero a lot. Other people who work near you and like Plant have been going to this other new restaurant. Here’s a 10% off coupon for lunch today. Try it out and let us know if our hunch was right.” That sounds a lot like the kind of advice I would value from a friend, but suddenly I could get it from an automated system. As far as I can tell, that is what the future looks like.
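
Here’s a rough sketch of how that kind of passive place-matching could work. The places list, the match radius, and the tallying logic are all my own simplifications for illustration, not how Place Me actually implements it.

```python
import math
from collections import defaultdict

# Hypothetical places database: name -> (lat, lon). Not Place Me's real data.
PLACES = {
    "Plant Cafe, Embarcadero": (37.7992, -122.3977),
    "Office": (37.7880, -122.4000),
}

MATCH_RADIUS_M = 75  # assume a GPS fix within 75 m counts as "at" a place

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two lat/lon points, in meters."""
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def tally_hours(fixes):
    """fixes: time-sorted list of (timestamp_seconds, lat, lon).
    Returns {place_name: hours}, crediting each interval between fixes
    to whichever known place the fix landed nearest, within the radius."""
    hours = defaultdict(float)
    for (t0, lat, lon), (t1, _, _) in zip(fixes, fixes[1:]):
        nearest = min(PLACES, key=lambda p: distance_m(lat, lon, *PLACES[p]))
        if distance_m(lat, lon, *PLACES[nearest]) <= MATCH_RADIUS_M:
            hours[nearest] += (t1 - t0) / 3600
    return dict(hours)
```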

Glass, I should point out, does not have GPS built in (yet), and much of its functionality depends on tethering to a smartphone (an Android one, of course) that does have GPS and a keyboard. The proponents of Glass love that it’s operated by voice command, but the idea of loudly prompting my Glass device at work or dictating my emails while walking down the street makes me cringe. I already stop and stand at the side of the sidewalk to type texts rather than use Siri. I will definitely kick the first person who randomly starts yelling “OK, Glass…” out of any meeting I’m running. So while the device does a bunch of cool stuff on its own by soaking up ambient information and feeding data to the user, I expect to see a lot of tethering and dual-device usage whenever that user wants to interact with the world or do much with the information.

While I’m bullish on apps and helpful data collected from passive sensors like Glass, I’m still not totally sure that Glass will actually wind up being a successful consumer device (unless, as Scoble points out, they make it really cheap, like under $200). A friend of mine calls them “Segway Glasses,” because he thinks they’re so nerdy-looking that only rich guys, rent-a-cops, and tourists would be willing to be seen in public using them. I think he might be right. However, I really do believe that the future will be full of apps that act like 99-cent sixth senses, and the sensors that feed data to those apps will grow smaller and smarter and, in time, become normal.