Ad:Tech 2013


This year for Ad:Tech I did something I’ve never done before–I worked a booth in the exhibit hall. Usually I hang out in the conference rooms with big-name marketers and exchange ideas with keynote speakers, and it’s rare that I have time to make it to the expo floor. However, we’re working with a very cool start-up called IMRSV that uses software to track people as they walk in front of a simple webcam, and they invited me to co-curate a booth. Their technology guesses people’s age and gender in real time, and it’s an incredibly easy way to get metrics about people in physical space without breaking the privacy barrier. Everything is anonymous and no images are saved–only data about the number and type of people walking in front of the webcam. We had cameras set up all over the building and were actively quantifying the traffic of people around the show. So, going into this, I thought it was going to be cool. It was not.

Let me now give you my highly biased, subjective, and purely qualitative view of the foot traffic we got at our booth. Over the course of a day spent explaining the technology and how IPG Lab uses it, I had roughly 5 intelligent conversations. The vast majority of people I spoke with were sad, desperate weirdos. There was the Russian spammer, the guy who runs free give-away websites, the dozens of bad email marketing, direct marketing, SEO marketing, bottom-feeder worst-of-the-worst inventory ad network sales guys, and finally, the people who were very concerned that the software was unable to automatically detect when they were flipping their middle finger at the camera, despite the fact that it did correctly guess their age and gender within seconds.

It’s this pile of winners I’d like to talk about. I realized as I looked out on the teeming masses of internet assholes that I am incredibly sheltered. I forget that working at a place where we actually try to make the internet better is relatively rare compared to the huge majority of internet businesses just trying to spam, game, or otherwise make a quick buck off of the frictionless mistakes of a million accidental clickers. And, I realized that I viciously hate these people.

I guess the point to take out of all this is that as much as big media can suck at times, they’re safe. The world of media is a lot like the real world in the sense that anarchy sounds great when you’re flush and comfortable, but when the shit hits the fan and a mob is marching towards your front door, suddenly you can’t wait to find the police. I love the anarchy of the internet, the democratization of information, and the way technological disruption is keeping big media on its toes, but I have to say, days like this I’m really glad that the giants behind sites like Hulu, NYTimes, and Netflix create such cozy and safe little media havens where I can hide myself away from the masses and enjoy my media in comfort and safety.

P.S. Russian spam guy, please don’t hack my site. I know you can. You don’t have to prove anything.

The Era of Big Feelings

[This post was originally written for, then edited for this blog. On a personal side-note, I totally just coined the phrase “Big Feelings.” Go me.]


At the IPG Media Lab, we often engage in what we call “Quantitative Qualitative.” In other words, we listen for Qualitative information, like how you feel when a pop-up ad startles you, or the rush you get when a web service seemingly reads your mind and suggests the exact thing you were looking for, but we do it on a very large scale. We don’t just want to know what a dozen people behind a two-way mirror think or feel, we want to know what hundreds or thousands of people think or feel, and be able to probe the nuance and variance in feeling that become available with large datasets.

“Big Data” is a term that gets thrown around a lot, but the information stored in these databases tends to be easily quantifiable types of information that are black or white, hard, objective sorts of things. Big datasets are rarely made up of odd, squishy, subjective things, like emotions. In practice, Big Data means being able to collect large amounts of data, but also being able to do something productive with it—to analyze it for insight. Some qualitative researchers are already making the shift to quantifying qualitative info–where they used to be dependent on small focus groups, they now skip them in favor of mining social networks for product sentiment. Why ask twelve housewives what they think when you can mine a million tweets, then run a semantic sentiment analysis to get unvarnished qualitative feedback from thousands of actual product users? To an extent, this is progress, but it’s also an invitation to listen to the fringe rather than the mainstream (do you really think the person venting about their vacuum cleaner on twitter represents you?). So, while more information is generally better than less, I have to wonder if we’re simply exchanging one set of biases for another.
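To make the “mine a million tweets” idea concrete, here’s a toy sketch of lexicon-based sentiment scoring. The tweets and word lists are invented for illustration–a real pipeline would pull from a social API and use a trained model–but the shape of the analysis is the same:

```python
# Toy lexicon-based sentiment scoring over (made-up) product tweets.
POSITIVE = {"love", "great", "amazing", "works"}
NEGATIVE = {"hate", "broken", "terrible", "useless"}

def sentiment(text: str) -> int:
    """Score a tweet: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "I love this vacuum, works great",
    "this vacuum is terrible and broken",
    "bought a new vacuum today",
]

print([sentiment(t) for t in tweets])  # → [3, -2, 0]
```

Notice that the neutral tweet scores zero and contributes nothing–at scale, it’s the loud outliers (the vacuum-venters) who dominate these feeds, which is exactly the fringe-bias worry.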

A technology that has intrigued us and led to some fascinating work here at the lab is webcam-based facial coding. The way facial coding works is by asking a panelist to turn on their webcam, in their own home, on their own computer, then using software to identify points on their face and infer emotion as we watch them react to whatever stimuli we put on screen. We use the term valence to describe the range of emotion they exhibit, which can be positive or negative. By getting hundreds of people to do this we start seeing patterns and can predict reaction. To be fair, it can be tricky to work with this data. People rarely tend to register much emotion when presented with a bland website, boring television show, or predictably pandering advertisement. However, when the content is good, suddenly the data comes to life. I have to wonder if years of lousy focus groups have convinced content creators that they need to dumb down their work to suit the lowest-common-denominator viewer. If we were to employ facial coding and base our optimization on manifestations of joy and laughter, or intensity of engagement, would the media landscape look like a very different place? What if we programmed our algorithms to listen for laughter and rewarded the audience with more of the same?

I’m not entirely sure where this path of exploration will lead, but I can say that our early tests have already turned up some very interesting findings. To share just one, while working on a project with the Undertone video ad network, we watched the facial expressions of people as they were exposed to either A) a video ad that launched automatically as an auto-play ad, or B) a video ad that launched upon user-click, as a user-initiated ad. It sounds obvious, in retrospect, that auto-play ads annoy people. But, we found it fascinating that we were able to quantify the aggregate delta of valence (positive vs. negative emotion) as real people were randomly exposed to the click-to-play ad vs. the auto-play ad.

In the example below from the study we did with Undertone, you can see that valence slightly decreased (indicating a negative change in emotion) when the auto-play ad launched compared to the base-line of how they felt about the content on the page, while valence increased (indicating a positive change in emotion) when the ad was navigated to and clicked on. Something as simple as expecting the ad to play vs. being surprised by it made a big difference. You can see that in the overall valence numbers during the time the ad played, which show a 3% increase from neutral (0% valence) for the click-to-play cell.
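For the curious, the arithmetic behind that “aggregate delta of valence” is simple enough to sketch. The traces below are invented numbers, not the Undertone data, chosen only to mirror the pattern described above:

```python
# Sketch of a valence-delta calculation: change in mean valence from the
# baseline (reading page content) to the period the ad played.
# All readings here are hypothetical, averaged across panelists.

def mean(xs):
    return sum(xs) / len(xs)

def valence_delta(baseline, during_ad):
    """Change in mean valence (a fraction, -1..1) from baseline to ad."""
    return mean(during_ad) - mean(baseline)

click_to_play = valence_delta(baseline=[0.00, 0.01, -0.01],
                              during_ad=[0.03, 0.04, 0.02])
auto_play = valence_delta(baseline=[0.00, 0.01, -0.01],
                          during_ad=[-0.02, -0.01, -0.03])
print(f"click-to-play: {click_to_play:+.0%}, auto-play: {auto_play:+.0%}")
# → click-to-play: +3%, auto-play: -2%
```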


This is certainly a small, tentative step towards using this technology in a meaningful way, but we feel it is an important one. We at the IPG Lab plan to keep pushing the boundaries of media insight by mashing up new technologies and quantifying qualitative data.

If you’re interested in learning more about the Undertone research, feel free to watch the webcast, get in touch with the good folks at Undertone, contact me, or come see us present this study live at the 2013 ARF Re:Think Conference in New York City.

Screen Equality (did not see this coming)

The narration on this video is a little intense, but it does a good job of showing off how I spent most of May and June, 2012. I’ll be presenting the findings for this study along with YuMe at the 2013 ARF Re:Think Conference in New York. Not to spoil your viewing experience, but the really interesting thing we discovered here is that…. [continued below]




…ad clutter has a lot more to do with ad effectiveness than does the device on which you view the ad. Surprising even me (pretty darn rare), the notion that people watch video on their mobile devices while on the go was largely disproven. Sure, if you have 30 seconds to kill while waiting for the train you might watch a short clip, but longer-format video and any serious viewing time on a phone or tablet is reportedly done while in bed or sitting on a couch. It’s definitely what I do, and it makes sense intuitively that other people would act this way, but it seems like the industry hasn’t yet wrapped its head around the idea that consumers don’t really differentiate between screens if the content is the same. Given the choice between Hulu and 4 minutes of ads in a show vs. 8 minutes in a live broadcast, the winner is generally Hulu or a DVR. Hulu, and online video providers like them, may well be the best bet advertisers and publishers can make. After all, if no one pays attention to the ads, why should anyone pay to advertise?

Honesty. The killer app of bio-metric data?

As a researcher, I spend a lot of time thinking about bias. I’m constantly working to make sure external bias doesn’t affect my work, and that I myself don’t accidentally introduce my own biases to research outcomes. There are a lot of common types of bias (feel free to get lost in this list over on Quora for a while). The first bias that pops off the page in the context of what I want to talk about today is “Self-Serving Attributions: Explanations for one’s successes that credit internal, dispositional factors and explanations for one’s failures that blame external, situational factors.” So what does this have to do with bio-metric data?

For the last few months I’ve been carrying around a fitbit. It’s basically just a digital pedometer, but what makes it interesting is that it wirelessly syncs with a tiny base-station every time I walk by the computer in my apartment, then broadcasts the data to the internet and any fitbit “friends” I have on the fitbit social website. My colleague Chad at the IPG Lab in NYC and my Dad in Chicago also have fitbits and can both log in any time and see how active I’ve been, and I can do the same for them. I can also see all of my own data cataloged, aggregated, and displayed in the form of graphs or comparative indices.

When I think about myself, I feel like I’m a pretty active guy. I walk to work, I get out at lunch, I exercise fairly frequently, and yet somehow I am magically not losing weight. My self-serving bias tells me it’s clearly my bad genes or the extra butter snuck into all the food I eat at restaurants. But, then I look at the data and it forces me to be honest with myself.

The simple reality is that on a normal day I’m sitting on my butt or asleep 84% of the time. The size of that big old lazy gray pie slice below came as a bit of a shock the first time I saw it, because it created a moment of uncomfortable cognitive dissonance the moment I became aware of my own bias.
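If you want to reproduce that pie slice for yourself, the math is almost embarrassingly simple. The minute counts below are hypothetical, but any tracker that logs activity levels gives you the same inputs:

```python
# Back-of-the-envelope version of the lazy gray pie slice: fraction of a
# day spent sedentary or asleep, from (hypothetical) tracker minutes.
minutes = {
    "asleep": 450,        # 7.5 hours
    "sedentary": 760,     # desk, couch, commute
    "lightly_active": 180,
    "very_active": 50,
}
total = sum(minutes.values())           # 1440 minutes in a day
inactive = minutes["asleep"] + minutes["sedentary"]
print(f"inactive share: {inactive / total:.0%}")  # → inactive share: 84%
```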

All that data stacks up over time and I’m left with a very precise picture of why my love handles refuse to melt. This is undeniably uncomfortable at first, but it quickly becomes normal once I accept the truth of the data, and that’s where things start getting interesting.

My Dad has lost 50 pounds since we started watching each other’s tracked fitness goals, I’ve lost about 12 pounds, and we both agree that our success is due largely to being more honest with ourselves, but also in large part because we’re more honest with each other. There’s a constant social pressure to keep ourselves healthy and keep each other healthy. The fitbit, however, is just the beginning of what’s possible.

Companies like Massive Health are creating smart phone apps that turn the phone itself into a networked bio-metric sensor, enabling individuals to track their own health metrics and allowing doctors to get a much more complete picture of how healthy their patients are. This is cool by itself, since it democratizes health care in a very real way and increases the quality of preventive care. The aspect of this that really gets me excited, however, isn’t how I’ll interact with my doctor using these new apps, but all the new ways in which I’ll be able to interact with people like my Dad, or my friends and my co-workers.

For example, we know that one of the most corrosive factors for human health is a high level of chronic stress. It would be a fairly simple thing to measure stress levels in real time using a wrist-band or watch with bio-metric sensors that’s synced via blue-tooth to a smart phone, which in turn broadcasts the data to the web. I could broadcast my stress level directly to my boss or Dad. On days when my stress levels are spiking repeatedly, my boss could use the information to reconsider giving me that extra project, or asking me to stay late, or he could simply walk over to see what’s up instead of delegating via email as usual. My Dad could set up an alert to have the system ping him when I have a particularly bad day and then give me a call, increasing my sense of community and well-being despite the fact that we live on different ends of the country. A prolonged period of stress could trigger a reminder to myself to sign up for a massage and get more sleep.
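None of this requires exotic engineering, either. Here’s a sketch of the alert logic, with made-up readings and a made-up threshold–real stress scores would come from the sensor, and the “ping Dad” part from whatever messaging service you broadcast to:

```python
# Sketch: fire an alert once a rolling window of stress readings
# (hypothetical 0..1 scores) stays entirely above a threshold.
from collections import deque

def make_alerter(threshold=0.7, window=5):
    recent = deque(maxlen=window)
    def feed(reading):
        recent.append(reading)
        # Alert only when the window is full and every reading is elevated,
        # so a single spike doesn't trigger a call from Dad.
        return len(recent) == window and min(recent) > threshold
    return feed

feed = make_alerter()
readings = [0.4, 0.8, 0.9, 0.85, 0.9, 0.95, 0.88]
print([feed(r) for r in readings])
# → [False, False, False, False, False, True, True]
```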

The point is, more measurement brings greater understanding. Seeing the bio-metric data of others has the amazing effect of encouraging empathy. This brings up another form of bias from that list on Quora: “Actor/Observer Difference: The tendency to see other people’s behavior as dispositionally caused but focusing more on the role of situational factors when explaining one’s own behavior.” In other words, without the window into my bio-metric stress data, I’m just an asshole to outside observers on the days I’m stressed, but with that data, suddenly it can be tied to stimuli, and just like that big grey pie slice in the chart above, it can surprise the hell out of someone.

I won’t go into it in too much detail here, but I can think of a million ways that ubiquitous web-broadcast bio-metric data can be put to use for the good of all.

As one example of the way this data was used in the real world, I’ll show you a case-study my lab did with CNET. As part of a shop-along ethnography study commissioned by CNET, the lab hooked up a group of people with bio-metric bracelets and watched them shop for electronics in a store with and without CNET’s shopping app (it provides reviews and info on the items people were shopping for in the store). They found that people’s stress levels declined when given all that extra data via the shopping app to help them make the right decision, and that stress levels increased when “helped” by the store clerks. You can check out more about the study on the blog.

I truly think this is the start of something big. I have no idea where it’s going to lead, but I think it’s going to be a real step in the evolution of media, and possibly humanity. People increasingly deal with each other via digital media, and in doing so give up a lot of the body language we rely on as humans to understand each other. Companies like Eloqua are just barely starting to scratch the surface of what’s possible in online media once you start listening for “digital body language.” Just imagine what all this new bio-metric data will do for humanizing the web.

It’s so important that everyone in technology today understand one fundamental truth: at the other end of the internet connection is a human being. Let’s never get so lost in the data that we forget that. To forget would be to commit my favorite form of bias: “Wishful thinking is the formation of beliefs and making decisions according to what might be pleasing to imagine instead of by appealing to evidence, rationality or reality.”


Quantifiable Excuse

In my last post, I wrote that I was terribly busy and would blog more, and in the next post you’ll see here, I intend to talk about the idea of quantifying your life and my own experiments with bio-metric feedback loops and passively collected data. It dawned on me as I was writing the next post that I could create something that fits right in the sweet spot between the two, which wound up being this post.

I say I’m too busy to write, but am I? Today, I have some time on my hands, which is why today, and not yesterday or tomorrow, I will quantify my excuse: I have been too busy to blog and probably will be again soon. Let’s have a look at the data.

Following is a graph with a little over a year’s worth of data across three data points that encompass about 90% of what I spend my waking “work” time doing–the progress I made on the novel I’ve been writing (yes, the novel) against a guesstimate of the total number of research projects I was actively working on (all of which require lots of reading and writing as I delve into and explain new topics) and, of course, the blog posts I’ve written. Novel progress is defined simply by the word count of each successive draft I saved over time.
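For the record, “word count of each successive draft” really is all the instrumentation this takes. The file names below are made up, but any folder of saved drafts gives you one data point per save:

```python
# Sketch: turn a folder of saved novel drafts into progress data points.
# Draft file names are hypothetical.
def word_count(path):
    """Crude word count of a text file: whitespace-separated tokens."""
    with open(path, encoding="utf-8") as f:
        return len(f.read().split())

drafts = ["draft_2012_01.txt", "draft_2012_06.txt", "draft_2013_01.txt"]
# One (draft, word count) point per saved draft, ready to graph:
# progress = [(d, word_count(d)) for d in drafts]
```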


Before I graphed this I kept guiltily finding myself wondering if I fall into the clichéd camp of people who excitedly start a blog, pump out a ton of posts at first, then fade away as the upkeep of the blog outweighs the benefits of having a blog. However, as I see the ebb and flow of all my work in one place, I realize that I’m constantly writing. Suddenly instead of feeling guilty, I simply feel at capacity; less like a cliché and more like the super-smart Seth Godin, who famously (in my circles at least) decided not to use Twitter in order to focus on his blog and writing books. The reality of time is that it’s finite, and is one of the few truly universally limiting factors in life. So, I do think that as the novel writing process winds to a close (I finally managed to send a near-final draft off to a friend to edit) I’ll shift back to blogging. But, importantly (to me anyway) I realize now that if I don’t, I’m sure I’ll have a good excuse.

Catching Up

Starting in January 2011 I became the one-man research department for the IPG Media Lab. If you’re not familiar with the Lab, imagine the place where the scarily accurate marketing technology featured in the movie Minority Report was invented, and you have a pretty good idea of what it’s like. (or just press play)

Six months later, I finally have a minute to catch my breath and write a bit. So, sorry if this post is kind of lame–I’m just going to zip through what I’ve been up to. I intend to spice things up a bit moving forward and focus more on cool new tech instead of talking about myself. Anyways, you can get a taste here of the kind of work I’ve been doing lately:

#1. I wrote a lengthy report on Marketing Automation Best Practices for Econsultancy that you can check out here. It’s full of all sorts of fun tid-bits on how robots will be doing your job more efficiently in the immediate future. Yes, I’m serious.

#2. My first big project with the IPG Media Lab was for a cool online video ad network called YuMe. They had this wacky idea that TV isn’t really measured well since it doesn’t account for people simply ignoring ads–e.g. using a trip to the fridge as a low-tech DVR. So, we put together a high-tech way to monitor people as they watched TV and record their attention levels, then did the same for people watching video on the web to see how the two compare. I won’t ruin the surprise, but if you’re interested, my client put together this (slightly over-exuberant) video to show off the results.

High-tech Holidays

Ever since my family switched from whispered phone conversations about who wants what to a 100% adoption rate of Amazon wish lists, Christmas shopping (a task I generally loathe) has gotten much, much easier for me, and I actually get stuff I need and want. The only downside is the lack of spontaneity introduced when everyone knows what they’re getting well in advance of ripping open any wrapping paper. As much as I love the Amazon wish list, and hope never to return to the days of opening poorly fitting sweaters, I think we can do better.

One of my favorite places to shop for odd-ball gifts is Etsy. I mean, where else can you find hand-carved wooden iPad stands, or pillows custom-embroidered with an image of me picking my nose? The problem, of course, is that you have to sift through a ton of crap to find the gems. Etsy has always cultivated a community of people who will hunt out the best stuff for you and then create best-of lists for others to use, which I appreciate. However, this Christmas I discovered a new feature.

You can now search for gifts for people by letting Etsy mine their Facebook profile “likes” and then suggest appropriate gifts. Dad gets a home-made painting of Obama and Rush Limbaugh kung-fu fighting! The girlfriend gets vintage Laura Palmer gear (remember Twin Peaks?)! OK, not really. I know you two occasionally read this blog. But, the point is that I could have gotten these things!

As more and more information about us and our likes and dislikes gets strewn about the web, I’m sure more applications like this will keep popping up. I know everyone is sure we’re heading for some dystopian future because of this trend, but I’m actually pretty optimistic about it. Imagine how rude the person will seem when they approach me at a cocktail party in 2020 and simply start asking me questions without first consulting my online profile. Seems like such a waste of time, when we could instead jump right into the meat of the conversation we were destined to have about why we hate each others’ views. I’m mostly kidding. Seriously, I think this could be cool as long as there’s a nice balance between privacy and utility. I’d like to see more of this kind of thing.

Happy shopping everyone. I wish you a merry capitalist utopia and a debt-free new year.