Archive for the ‘Marketing’ Category
[I've tweeted about this before.]
If fashion stores believed in A/B testing, they would probably only sell white XXL shirts. Most customers would fit tent-sized garments; most colours go well with white. Giant colourless shirts would presumably have the better sales conversion rate by far.
But of course this would be far from optimal.
Customers come in different shapes and sizes. If you really want to maximise conversion, you will have to tailor to their specific needs and personal preferences. A/B testing might be the latest fashion, but the truth is that some customers will have a taste for B even though the majority might fancy A. This is why these 20 lines of code will beat A/B testing every time.
The trick is not to figure out whether A is better than B, but when A is better than B; and for whom.
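For readers wondering what those "20 lines of code" might look like: the idea behind that phrase is the epsilon-greedy multi-armed bandit, which mostly serves the best-performing variant but keeps exploring the alternatives. A minimal sketch, with hypothetical option labels and conversion logging:

```python
import random

class EpsilonGreedy:
    """Serve the best-performing option most of the time,
    but explore a random alternative with probability epsilon."""

    def __init__(self, options, epsilon=0.1):
        self.epsilon = epsilon
        self.trials = {o: 0 for o in options}
        self.wins = {o: 0 for o in options}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.trials))  # explore
        # exploit: highest observed conversion rate so far
        return max(self.trials,
                   key=lambda o: self.wins[o] / (self.trials[o] or 1))

    def record(self, option, converted):
        """Log whether showing this option led to a conversion."""
        self.trials[option] += 1
        if converted:
            self.wins[option] += 1

bandit = EpsilonGreedy(["A", "B"])
choice = bandit.choose()               # show this variant to the visitor
bandit.record(choice, converted=True)  # later: log whether they bought
```

Unlike a fixed 50/50 split, the bandit shifts traffic towards the winner as evidence accumulates, while the exploration rate keeps re-testing the loser in case circumstances change.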
Marketing should not be one-size-fits-all.
Marketing catchphrases like “recommended by experts” (an appeal to authority), “world-renowned bestseller” (candidly claiming consensus) and “limited supply only” (suggesting scarcity) are widely used to promote many different types of products. To a marketeer, these persuasion tactics are like universally coaxing super supplements that can make just about any offer seem more enticing.
But not all these advertising additives are created equal; and apparently, neither are all consumers.
In a fascinating (at least, to scientific advertising geeks like me) study titled “Heterogeneity in the Effects of Online Persuasion”, social scientists Maurits Kaptein and Dean Eckles looked at the differences in susceptibility to varying influence tactics between individuals. What they found may change the way we think about recommendation engines and marketing personalization in general.
It is striking how large the heterogeneity is relative to the average effects of each of the influence strategies. Even though the overall effects of both the authority and consensus strategies were significantly positive, the estimates of the effects of these strategies was negative for many participants. [...] Employing the “wrong” strategy for an individual can have negative effects compared with no strategy at all; and the present results suggest there are many people for whom the included strategies have negative effects.
Our advertising additives can have adverse side-effects. Some people don’t respond well to authority; others don’t feel much for the majority rule. If you pick the wrong strategy for a particular individual, you may actually hurt your marketing efforts; independent of what product you are actually trying to sell.
Kaptein also collaborated on another publication, “Means Based Adaptive Persuasive Systems”, which looked at the combined effects of multiple persuasion strategies.
Contrary to intuition, having multiple sources of advice agree on the recommendation had not only no positive impact on compliance levels but actually had a slightly negative effect when compared to the preferred strategy. This is a fascinating discovery since one would assume two agreeing opinions would be stronger than one.
As strange as it may seem, in the case of combined cajolery, the whole is not only less than the sum of its parts; it is less than the single best bit.
1 + 3 = 2
Eckles and Kaptein conclude that personalization is key; and I couldn’t agree more.
To use the results presented above influencers will have to create implementations of distinct influence strategies to support product representations or customer calls to action. As in the two studies presented above, multiple implementations of influence strategies can be created and presented separately. Thus, one can support a product presentation on an e-commerce website by an implementation of the scarcity strategy (“This product is almost out of stock”) or by an implementation of the consensus strategy (“Over a million copies sold”). If technically one is able to represent these different strategies together with the product presentations, identify distinct customers, and measure the effect of the influence strategy on the customer, then one can dynamically select an influence strategy for each customer.
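The selection mechanism that passage describes can be sketched as a bandit keyed by customer segment and strategy: keep conversion tallies per (segment, strategy) pair and serve the strategy with the best record for that segment, with a little exploration mixed in. The segment and strategy labels below are hypothetical illustrations, not the authors' implementation:

```python
import random
from collections import defaultdict

# Hypothetical influence strategies; labels for illustration only.
STRATEGIES = ["scarcity", "consensus", "authority"]

trials = defaultdict(int)  # impressions per (segment, strategy)
wins = defaultdict(int)    # conversions per (segment, strategy)

def pick_strategy(segment, epsilon=0.1):
    """Serve the best-performing strategy for this segment,
    exploring a random alternative with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(STRATEGIES)
    def rate(s):
        t = trials[(segment, s)]
        return wins[(segment, s)] / t if t else 0.0
    return max(STRATEGIES, key=rate)

def record(segment, strategy, converted):
    """Log whether showing this strategy led to a conversion."""
    trials[(segment, strategy)] += 1
    wins[(segment, strategy)] += int(converted)
```

Different segments can converge on different strategies, which is exactly the heterogeneity the study reports.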
The good news is that we can do this today. Using Oracle Real-Time Decisions, choosing the best influence strategy for a particular customer can easily be implemented as a separate decision to be optimized for conversion. Alternatively, these strategies could simply be considered as another facet of your assets in RTD; similar to the way we would utilize product category metadata to share learnings across promotions.
Personalization is about more than just deciding what you want to sell. This research clearly shows that a recommendation engine that can only select the “best” product is simply not good enough.
Because conversion sometimes requires a little persuasion.
Without fail, a company will employ a recommendation engine for a purpose (nobody does this for fun, really). Often, that purpose is profit (or something along those lines). For most companies, ‘relevance’ is irrelevant (no pun intended).
The success of any recommendation engine should (in my opinion) be measured by its ability to meet the objectives it was intended to achieve. As said, in most cases, this will be tied to sales or profit.
A/B test your system against a control (often random, sometimes rules-based). If your recommendations increase sales (or decrease costs, reduce call handling time, increase revenue, improve customer satisfaction, etc.) compared to the alternative, you’re doing pretty well. You can forget about the rest.
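Comparing your system against the control comes down to a simple two-proportion comparison on whichever metric you chose. A minimal sketch using a normal-approximation z-test; all conversion numbers here are made up:

```python
from math import sqrt, erf

def conversion_lift(conv_a, n_a, conv_b, n_b):
    """Relative lift of variant a over control b, plus a two-sided
    p-value from a pooled two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided tail probability under the standard normal
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return (p_a - p_b) / p_b, p_value

# made-up numbers: 120/1000 conversions vs 90/1000 for the control
lift, p = conversion_lift(120, 1000, 90, 1000)
```

For these made-up numbers that works out to roughly a 33% lift, significant at the 5% level; with smaller samples the intervals are wide, so resist declaring a winner too early.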
Who cares about relevance if you can measure business value?
[ Posted on Quora as answer to What is the best way to test the relevance of a recommendation engine? ]
Facebook has recently discovered that beyond the uncanny valley of personalized marketing lies the bottomless pit of invasive identity misappropriation.
But there is a deeper problem here. I’ve said it before and I will say it again. Facebook has the data, but they do not have users with shopping intent. Nobody goes to Facebook to buy stuff. Facebook is for meeting friends, like a bar or a club.
Even with the best products in the world and the most detailed private information it is not easy to sell stuff to strangers in bars; unless you’re selling beer.
Following our last discussion on [prospect.subject_area] I think the following article would be of particular interest to you.
Seth Godin writes:
Sure, it’s easy to grab a first name from a database or glean some info from a profile.
But when you pretend to know me, you’ve already started our relationship with a lie. You’ve cheapened the tools we use to recognize each other and you’ve tricked me, at least a little.
Increased familiarity begets heightened expectations. Personalization has its own uncanny valley.
The uncanny valley is a hypothesis in the field of robotics and 3D computer animation, which holds that when human replicas look and act almost, but not perfectly, like actual human beings, it causes a response of revulsion among human observers.
When you treat your customers as though you know them personally they will be personally offended if you do not. Beware of the eerie hollow of broken promise.
About 60% of the people stopped when we had 24 jams on display and then at the times when we had 6 different flavors of jam out on display only 40% of the people actually stopped, so more people were clearly attracted to the larger varieties of options, but then when it came down to buying, so the second thing we looked at is in what case were people more likely to buy a jar of jam.
What we found was that of the people who stopped when there were 24 different flavors of jam out on display only 3% of them actually bought a jar of jam whereas of the people who stopped when there were 6 different flavors of jam 30% of them actually bought a jar of jam. So, if you do the math, people were actually 6 times more likely to buy a jar of jam if they had encountered 6 than if they encountered 24, so what we learned from this study was that while people were more attracted to having more options, that’s what sort of got them in the door or got them to think about jam, when it came to choosing time they were actually less likely to make a choice if they had more to choose from than if they had fewer to choose from.
A fascinating psychological effect with clear implications for display advertising, but there is a lesson here for online marketeers and analysts as well.
In this study, fewer people stopped when there was less choice, but more people actually bought something. If we were only measuring the former (i.e. attention), and not the latter (i.e. sales), we would be led to think more choice would be about 50% more effective at bringing in customers. And boy, would we be wrong!
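The funnel arithmetic from the study makes the point concrete:

```python
# Figures quoted in the study above: (stop rate, purchase rate among stoppers)
large_display = (0.60, 0.03)   # 24 flavours on display
small_display = (0.40, 0.30)   # 6 flavours on display

def overall_conversion(stop_rate, buy_rate):
    """Fraction of all passers-by who end up buying a jar."""
    return stop_rate * buy_rate

large = overall_conversion(*large_display)   # 1.8% of passers-by buy
small = overall_conversion(*small_display)   # 12% of passers-by buy

# Attention alone makes 24 flavours look 50% better (60% vs 40% stopped);
# sales show the 6-flavour display converting more than six times as well.
```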
Don’t get yourself in a jam; remember this next time you decide to measure click acceptance instead of actual sales to drive your online marketing effort. Clickthrough rates are useful as a measure by proxy, but they can be misleading.
Your accountant might care about the facts. You, the marketer, need to care about the conversations and the memories.
I recognize the sentiment, but think that’s only partially true.
Individual consumer perception might be the result of conversations and memories, but marketing to consumers as a group is also about numbers. Results should be measurable, lest a company risk investing a lot of money not just in stories, but in fairytales.
You, the marketer, need to care about the conversations, memories and the facts. You need to be an accountant as well as a storyteller.
It seems that my little rant against the apparent lack of scientific rigor and the use of data to analyse performance in the world of advertising is nothing new. Scientific Advertising, written in 1923 by advertising icon Claude C. Hopkins, lays out a convincing argument in favor of the use of an empirical and results-oriented approach in all marketing.
The bottom line of this argument is the same as the true bottom line for every company. It’s all about the money.
Scientific advertising is impossible without [knowing your results]. So is safe advertising. So is maximum profit.
Groping in the dark in this field has probably cost enough money to pay the national debt. That is what has filled the advertising graveyards. That is what has discouraged thousands who could profit in this field. And the dawn of knowledge is what is bringing a new day in the advertising world.
Hopkins pioneered the use of keyed coupons to track the success of different campaigns and ads. He believed that the only purpose of advertising was to sell more products and that the effects of such efforts should be measurable and those responsible be held accountable. New ideas and concepts should be tried in a small, controlled and safe setting so that their (monetary) results could be measured and analyzed.
Only when a new approach proved to be successful in a number of trials could it be trusted to be applied at a larger scale. Take this passage from his autobiography My Life in Advertising.
How have I been able to win from this situation so many great successes? Simply because I made so many mistakes in a small way, and learned something from each. I made no mistake twice. Every once in a while I developed some great advertising principle. That endured.
The technology of the time allowed Hopkins et al. to try new things and make mistakes only on a per-town basis. Results had to be analyzed manually and each iteration required significant effort and some investment. Still, what knowledge could be gleaned from these relatively small scale ventures proved key to Hopkins’ success in advertising.
Judging by his own accounts it never occurred to Hopkins that different ads would have different results for different towns or different people. He was simply empirically searching for the perfect ad; one town at a time. Once found, this super ad would be unleashed upon the entire nation.
In that light, Oracle Real-Time Decisions (RTD) is like the traditional scientific advertising method on steroids. Not only does it apply the concepts of empirically testing success, failing in small doses and learning from those mistakes automatically; it is also able to segment the respondents into a seemingly infinite number of sub-groups and find a super ad for each.
Computing Science has taken Scientific Advertising to the next level; and you cannot afford not to follow.
[As a side note, I think it is important to realize that Scientific Advertising was written before Edward Bernays took his uncle's ideas and used them to revolutionize the field of advertising. Thanks to Hopkins's scientific and empirical approach most of the facts and results cited still hold water, but some of the explanations and conclusions he puts forward are terribly outdated. If you want to know more, I can highly recommend Adam Curtis's award-winning 2002 documentary for the BBC, The Century of the Self.]
3. Gratuitous use of Flash.
It is not Adobe’s fault, it is your fault for using Flash for the most pathetic things mankind has known. Why? Because your agency can win an award? Because you believe that the Web is essentially TV? Slow sites make your management happy?
Remember every time you use flash on your website, a cute puppy dies. Think of the puppy!
Most of the items on this extended list compiled by Avinash Kaushik seem pretty obvious to me. Most likely you, my internet-savvy friends, would question even the need for writing these down like Avinash has.
Oh, how ignorance is bliss, my friends!
My work with Real-Time Decisions (RTD) has introduced me to the wonderful world of marketing and advertising. A world where many companies still consider basic concepts like scientific control to be the latest craze (they call it ‘A/B testing’, but the idea remains the same). A world where gut feeling and the HiPPO (highest paid person’s opinion) continue to rule, while real data and hard evidence are readily available.
The fact that Avinash felt the need to publish his list of truisms might explain some of the difficulty in conveying the concepts behind RTD to customers. Number ten on the list in particular sounds awfully familiar to me.
10. Making lame metrics the measures of success: Impressions, Click-throughs, Page Views.
They, and their brethren like video views and emails sent and # of followers on Twitter and Likes on Facebook and. . . all stink worse than Amorphophallus Titanum.
Use metrics that matter: Loyalty, Recency, Net Profit, Conversation Rate, Message Amplification, Brand Evangelist Index, Customer Lifetime Value and so on and so forth. Each a glorious magnificent metric that truly tells you that value was delivered, or delivers the swift kick in the pants that we all need when we don’t. How can you not love that?
If these people are not at ease with the idea of using cold data to measure success, or if they simply do not know how to define ‘success’ in the first place, how do you think they are going to feel about letting an artificially intelligent computer program improve the rate of that success by learning about customer behavior and preference? Not so good, I guess.
David Lightman: [to Joshua] Come on. Learn, goddammit.