The Era of Big Feelings

[This post was originally written for ipglab.com, then edited for this blog. On a personal side-note, I totally just coined the phrase “Big Feelings.” Go me.]


At the IPG Media Lab, we often engage in what we call “Quantitative Qualitative.” In other words, we listen for qualitative information, like how you feel when a pop-up ad startles you, or the rush you get when a web service seemingly reads your mind and suggests the exact thing you were looking for, but we do it at a very large scale. We don’t just want to know what a dozen people behind a two-way mirror think or feel; we want to know what hundreds or thousands of people think or feel, and to probe the nuance and variance in feeling that only become visible in large datasets.

“Big Data” is a term that gets thrown around a lot, but the information stored in these databases tends to be easily quantifiable: black-or-white, hard, objective sorts of things. Big datasets are rarely made up of odd, squishy, subjective things like emotions. In practice, Big Data means being able to collect large amounts of data and also do something productive with it, analyzing it for insight. Some qualitative researchers are already making the shift to quantifying qualitative information: where they used to depend on small focus groups, they now skip them in favor of mining social networks for product sentiment. Why ask twelve housewives what they think when you can mine a million tweets, then run a semantic sentiment analysis to get unvarnished qualitative feedback from thousands of actual product users? To an extent, this is progress, but it’s also an invitation to listen to the fringe rather than the mainstream (do you really think the person venting about their vacuum cleaner on Twitter represents you?). So, while more information is generally better than less, I have to wonder if we’re simply exchanging one set of biases for another.
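Just to make that concrete, here is a minimal sketch of the kind of analysis I’m describing, assuming the tweets have already been pulled down from the Twitter API and using NLTK’s off-the-shelf VADER sentiment analyzer. The tweets below are invented for illustration, and this is not our actual tooling.

```python
# A minimal sketch of large-scale sentiment mining, assuming the tweets
# were already collected elsewhere. Uses NLTK's VADER analyzer; the
# tweet texts here are invented for illustration.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

tweets = [
    "This vacuum cleaner changed my life, seriously.",
    "Third one that broke in a year. Never buying this brand again.",
    "It's fine I guess. Does the job.",
]

analyzer = SentimentIntensityAnalyzer()
# VADER's compound score runs from -1 (most negative) to +1 (most positive).
scores = [analyzer.polarity_scores(t)["compound"] for t in tweets]

mean_sentiment = sum(scores) / len(scores)
share_negative = sum(s < -0.05 for s in scores) / len(scores)
print(f"mean sentiment: {mean_sentiment:+.2f}, negative share: {share_negative:.0%}")
```

Run this over a million tweets instead of three and you get the scale I mean, along with the fringe-vs-mainstream bias problem I mentioned above: the people tweeting are not a random sample of the people buying.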

A technology that has intrigued us and led to some fascinating work here at the lab is webcam-based facial coding. Facial coding works by asking a panelist to turn on their webcam, in their own home, on their own computer, then using software to identify points on their face and infer emotion as we watch them react to whatever stimuli we put on screen. We use the term valence to describe the range of emotion they exhibit, which can be positive or negative. By getting hundreds of people to do this, we start seeing patterns and can predict reactions. To be fair, this data can be tricky to work with. People rarely register much emotion when presented with a bland website, boring television show, or predictably pandering advertisement. But when the content is good, the data suddenly comes to life. I have to wonder if years of lousy focus groups have convinced content creators that they need to dumb down their work to suit the lowest-common-denominator viewer. If we employed facial coding and based our optimization on manifestations of joy and laughter, or intensity of engagement, would the media landscape look very different? What if we programmed our algorithms to listen for laughter and rewarded the audience with more of the same?
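To give a feel for the aggregation step, here is a minimal sketch, assuming each panelist’s facial-coding output has already been reduced to a per-second valence score between -1 and +1. The landmark detection and emotion inference happen inside the facial-coding software and aren’t shown, and the numbers below are made up.

```python
# A minimal sketch of aggregating panelists' valence over time, assuming
# the facial-coding software outputs a per-second valence series in
# [-1, +1] for each panelist. All numbers are illustrative.
import numpy as np

# rows = panelists, columns = seconds of exposure to the stimulus
valence = np.array([
    [0.00,  0.05,  0.20,  0.35, 0.10],  # panelist who warmed to the content
    [0.02, -0.10, -0.25, -0.15, 0.00],  # panelist who reacted negatively
    [0.00,  0.00,  0.10,  0.30, 0.40],  # panelist who laughed near the end
])

mean_curve = valence.mean(axis=0)       # the audience's emotional arc over time
peak_second = int(mean_curve.argmax())  # when the audience lit up the most

print("mean valence per second:", np.round(mean_curve, 2))
print(f"strongest positive reaction at second {peak_second}")
```

With three panelists this is noise; with hundreds, the mean curve starts to look like a readout of the content itself, which is exactly what makes the optimize-for-laughter idea tempting.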

I’m not entirely sure where this path of exploration will lead, but I can say that our early tests have already turned up some very interesting findings. To share just one: while working on a project with the Undertone video ad network, we watched people’s facial expressions as they were exposed to either A) a video ad that launched automatically (auto-play), or B) a video ad that launched on a user’s click (user-initiated). It sounds obvious, in retrospect, that auto-play ads annoy people. But we found it fascinating that we were able to quantify the aggregate delta of valence (positive vs. negative emotion) as real people were randomly exposed to the click-to-play ad vs. the auto-play ad.
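For the curious, here is a minimal sketch of that comparison, assuming we have each panelist’s average valence during the surrounding page content (the baseline) and during the ad itself, for both randomly assigned cells. The numbers are illustrative, not the study’s actual data.

```python
# A minimal sketch of the valence-delta comparison, assuming panelists
# were randomly assigned to an auto-play or click-to-play cell and each
# panelist's mean valence was measured at baseline (page content) and
# during the ad. All numbers are illustrative.
import numpy as np

def valence_delta(baseline, during_ad):
    """Average change in valence from page baseline to ad exposure."""
    return np.mean(np.asarray(during_ad) - np.asarray(baseline))

auto_base,  auto_ad  = [0.01, 0.00, 0.02], [-0.02, -0.03, 0.00]
click_base, click_ad = [0.00, 0.01, 0.01], [ 0.03,  0.04, 0.02]

print(f"auto-play delta:     {valence_delta(auto_base, auto_ad):+.3f}")
print(f"click-to-play delta: {valence_delta(click_base, click_ad):+.3f}")
```

Because assignment to cells was random, the difference between the two deltas can be read as the emotional cost of surprising people with an ad rather than letting them choose it.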

In the example below from the study we did with Undertone, you can see that valence slightly decreased (indicating a negative change in emotion) when the auto-play ad launched, compared to the baseline of how people felt about the content on the page, while valence increased (indicating a positive change in emotion) when viewers navigated to the ad and clicked on it. Something as simple as expecting the ad to play vs. being surprised by it made a big difference. You can see that in the overall valence numbers during the time the ad played, which show a 3% increase from neutral (0% valence) for the click-to-play cell.

[Figure: valence during ad exposure, auto-play vs. click-to-play]

This is certainly a small, tentative step towards using this technology in a meaningful way, but we feel it is an important one. We at the IPG Lab plan to keep pushing the boundaries of media insight by mashing up new technologies and quantifying qualitative data.

If you’re interested in learning more about the Undertone research, feel free to watch the webcast, get in touch with the good folks at Undertone, contact me, or come see us present this study live at the 2013 ARF Re:Think Conference in New York City.