The limiting factor in furthering our understanding of how stuff works is often our own biology. We need technology to bridge the gaps our brains aren’t designed to manage. Our brains are amazing at doing certain things, like filling in perception/knowledge gaps with memories and best guesses, but ask someone to listen to two songs simultaneously or make a decision based on more than twenty simultaneous variables and you start to see the brain break down.
For the last few hundred years, media technology has mostly been focused on supplying people with information like facts, figures, stories, parables, and other linear forms of information that are easy for a normal brain to digest. In the age of the internet, filtration of all this information has become the main challenge, and technologies from TV Guide to Google to Pandora have done a pretty good job of it. The technological challenge that remains, however, is cognition and understanding. Where is the machine that helps me understand how weather affects daily life, from alcoholism rates to the kinds of clothing people prefer in different climates? Sure, there are studies that pick apart the macro data to focus on micro issues, but they’re inevitably a kludge of imperfect data analyzed by programs designed to simplify complex issues into bullet points our brains can handle. I don’t think that’s good enough.
We now live in the era of big data. Everything is out there, but we don’t really know what to do with it. We are not literate in data, and for most people, thinking about a problem in terms of raw data rather than outcomes and conclusions is a difficult exercise. I love Malcolm Gladwell’s book Blink because it gave us a peek at the problem. To paraphrase, there are two parts of your brain–the part that thinks and is reading this word and sounding out the vowels and consonants in your head, and the part that reacts and subconsciously comes to conclusions. The part of your brain that thinks is wired to assume that the subconscious part of your brain is making good decisions when it reacts, but Gladwell questions this. Do you really have enough good data stored away in the recesses of your memory to make good “gut” decisions?
To determine the quality of a decision, you have to know whether there is enough data, and whether the data itself is correct. Data analysts refer to the problem of low-quality data as “garbage in, garbage out.” Think about your subconscious as a database kicking out statistics. If you grew up in a big city and every gunshot you ever heard was criminal violence, your “gut” will produce a statistic that says 99.9% of the time you hear a gunshot, something illegal just happened. If, however, you grew up in a rural area where hunting is normal, you would have a very different data record in your brain. Your database would say that 95% of the time, a gunshot is probably just a hunter. That city person would assume the wrong thing if they went to the country and heard a gunshot. Their gut would mislead them, not because their internal data was incorrect, but because it was incorrect out of context. That person’s internal database wouldn’t be big enough to produce a correct “gut reaction” all the time.
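The “subconscious as a database” idea can be sketched in a few lines of code. The percentages below are the illustrative ones from the paragraph above, not real crime or hunting statistics:

```python
# A toy model of the "gut as database" idea: the same event (hearing a
# gunshot) gets a very different interpretation depending on which
# context built your internal statistics. Numbers are illustrative.

gut_database = {
    "city":  {"criminal": 0.999, "hunter": 0.001},
    "rural": {"criminal": 0.05,  "hunter": 0.95},
}

def gut_reaction(upbringing: str) -> str:
    """Return the most likely explanation according to that gut's stats."""
    stats = gut_database[upbringing]
    return max(stats, key=stats.get)

# A city gut transplanted to the country still reacts with city stats:
print(gut_reaction("city"))   # prints "criminal", even if it was a hunter
print(gut_reaction("rural"))  # prints "hunter"
```

The point of the sketch is that neither database is “wrong”; each is accurate for the environment that generated it, and the error only appears when a reaction trained in one context is applied in the other.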
This is where most of the books out there on the subject end (Blink, Predictably Irrational, Subliminal). They simply want us to be aware that our gut reaction is capable of being wrong when we think it’s right, and vice versa, and that we should actively think about what data is going into these subconscious reactions. What they don’t do is go on to give us any sort of framework for fixing the problem. They simply tell us to hand over the decision making to the experts when we realize we don’t have the depth or breadth of knowledge to make accurate predictions. But, what if those experts are prohibitively expensive or the “experts” are just marketers steering you into the arms of their clients? Why can’t we empower ourselves to fix problems like this?
I think technology is capable of radically changing the ways in which we make decisions, and in doing so, increase the accuracy of our decision-making. Just as we use weather forecasts to correct our occasionally inaccurate assumption that a bright, sunny morning means we won’t get rained on during an afternoon bike ride, we can use technology to help us identify bias and inaccuracy in the internal models our brains use to predict all sorts of other things. Just thinking out loud here…what if we had a GPS based threat indicator app on our phones that took into account who we are? A child’s threat indicator might start buzzing when he nears a dangerous intersection where data shows that pedestrians have been hurt before. My threat indicator might start buzzing when I walk into a store that has a bad Better Business Bureau rating. My work calendar could warn me when I schedule a meeting at an office where the management has been investigated or convicted of white collar crimes. In all these cases, technology helps push contextually relevant nudges to adjust for gaps in our intuition–that database of experience our faulty subconscious depends on.
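As a rough sketch of how one of those nudges might work under the hood, here is a hypothetical check that compares the user’s GPS position against flagged locations drawn from some public data-set. The coordinates, radius, and danger list are all made up for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# (lat, lon, reason) tuples -- invented stand-ins for real open data
# like pedestrian-accident records or business-rating databases.
flagged = [
    (40.7484, -73.9857, "dangerous intersection"),
    (40.7527, -73.9772, "bad BBB rating"),
]

def nudges(lat, lon, radius_m=150):
    """Return the reasons for any flagged locations within radius_m."""
    return [reason for (flat, flon, reason) in flagged
            if haversine_m(lat, lon, flat, flon) <= radius_m]

print(nudges(40.7485, -73.9855))  # prints ['dangerous intersection']
```

A real version would personalize the flagged list per user (the child gets intersections, the adult gets business ratings), but the core loop is the same: position in, contextually relevant warning out.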
Now, think about how such a system would be constructed. What you need is public data that everyone can agree on to use as the basis for your model. This is difficult to do in a capitalist society, because everyone is trying to profit off their data, or worried that someone else will. Certain data-sets, however, have been made publicly available for the greater good of everyone, and many more for the greater good of Google and the money they make from advertising. Weather data is relatively open. The correct time is available to all–no one has to calibrate their watch using calculations based on the sun. GPS is open to any device, allowing our devices to know where they, and by proxy we, are. The location of the next bus, train, or plane is published openly, allowing us to commute much more efficiently. Map data is another amazing open resource (and is often taken for granted): if you wanted to, you could quickly and easily head over to a service like mapbox.com or Google Maps and plot the location of whatever you want.
I did a simple experiment to see if I was really spending my time the way I think I’m spending it. I tracked all my time by place using the Place Me app (which uses GPS to look up addresses and figure out where you are), and was really surprised by what I found. The app isn’t 100% accurate, so I had to combine the time for about 10 places within a block of where I live that it thought I had visited in order to get a single time-spent number for being home. Predictably, I spend the majority of my time at home, followed by the office, but the number three location on the list was my local coffee shop. I had no idea I was spending so much time there. My gut was clueless.
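That clean-up step, merging the roughly ten near-identical “places” into one, is easy to automate. Here is a rough sketch with invented coordinates and minutes; rounding latitude and longitude to three decimal places lumps together points within roughly a city block:

```python
# Merge GPS place records that are really the same location, then sum
# time per place. Coordinates and minutes are invented for illustration.

records = [
    ("40.7300,-73.9900", 400),  # home, logged under several nearby points
    ("40.7301,-73.9902", 350),
    ("40.7299,-73.9898", 250),
    ("40.7600,-73.9700", 480),  # office
    ("40.7310,-73.9850", 300),  # coffee shop
]

def merge_nearby(records, precision=3):
    """Merge records whose rounded coordinates match (~100m at 3 decimals)."""
    totals = {}
    for coords, minutes in records:
        lat, lon = (float(x) for x in coords.split(","))
        key = (round(lat, precision), round(lon, precision))
        totals[key] = totals.get(key, 0) + minutes
    return totals

# Rank places by total minutes spent: home, then office, then coffee shop.
for place, minutes in sorted(merge_nearby(records).items(),
                             key=lambda kv: -kv[1]):
    print(place, minutes)
```

Grid-rounding is crude (two points straddling a grid edge won’t merge), but for a personal sanity check on where your time goes, it gets you the same ranked list I had to assemble by hand.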
We run into problems when open data has quality issues–the garbage in, garbage out effect. There’s a lot of objectively false data floating around the internet, and because the internet wasn’t designed to capture objective data, it generally doesn’t. We’ve gotten used to big tech companies just giving us stuff that radically changes how we live our lives, but few people ever stop to wonder where all the data that goes into their products comes from, or what data trails we create in our own lives. Maybe it’s time data literacy was taught in schools. Perhaps in a decade or less, we’ll be ready to move from linear story-telling to real-time complex models.
A normal person living today has as much data at their fingertips as a high-paid McKinsey consultant did 20 years ago. The rapid democratization of data and computing power easily brings boat-loads of information right to your cell phone. New filtration systems allow us to zero in on the important bits of information and pull precise bits from petabytes. Where we still fall flat on our faces, however, is getting past our own brains, and our own biases, in order to make better decisions.