Archive for the ‘Psychology’ Category
You could say fine dining is a bit of a hobby of mine; and as I’ve mentioned before, I’ve composed quite a few restaurant reviews over the years. I enjoy writing about food almost as much as I love eating it.
Whilst fantasising about fancy food with a colleague the other day, we wondered whether there is any relationship between the length of my reviews and the associated scores. In some strange way, it made intuitive sense to me that I would devote more words to describing why a particular restaurant did not live up to my expectations.
Thinking about this, the first negative review that came to my mind was one I wrote for “The Good View” in Chiang Mai, Thailand.
If service were any slower than it already is, cobwebs would certainly overrun the place. When food and drinks eventually do arrive they’re hardly worth the wait.
Fruit juice contained more sugar than a Banglamphu brothel and cocktails had less alcohol in them than a Buddhist monk. The mixed Northern specialties appetizer revealed itself to be three kinds of sausage and some raw chillies; very special indeed.
The spicy papaya salad probably tasted alright, but I was unable to tell because my taste buds were destroyed on the first bite. (Yes, I see the irony in complaining a spicy papaya salad was too spicy, but in my mind there’s a difference between spicy food and napalm.)
Also, the view is terribly overrated.
Conversely, the first positive review that popped into my brain was this rather terse piece for “Opium” in Utrecht, the Netherlands.
Om nom nom.
Judging by this tiny sample there might indeed be something to the hypothesis that review length and review score are negatively correlated. To confirm my hunch, I decided to load my reviews into R for a proper statistical analysis.
> cor.test(nn_reviews$char_count, nn_reviews$score)

        Pearson's product-moment correlation

data:  nn_reviews$char_count and nn_reviews$score
t = 0.2246, df = 121, p-value = 0.8227
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.1571892  0.1967366
sample estimates:
       cor
0.02041319
To my surprise, the analysis shows there is practically no relation between length and score. Contrary to what the two reviews above seem to suggest, I do not require more letters to describe an unpleasant dining experience than a pleasant one.
A simple plot of the two variables gives some insight into a possible cause for my misconception.
The outlier in the bottom right happens to be my review of The Good View. All my other reviews are much shorter and seem to be quite evenly distributed across the different scores.
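For anyone without R at hand, the same Pearson product-moment correlation can be sketched in a few lines of plain Python. The numbers below are made-up toy data for illustration, not my actual review set:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation, the statistic behind R's cor.test."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: (character count, score) pairs invented for this example.
char_counts = [120, 340, 95, 410, 210, 150]
scores      = [4, 3, 5, 2, 4, 5]
print(pearson_r(char_counts, scores))
```

A value near zero, as in my real data, means review length tells you essentially nothing about the score; R's cor.test adds the t-statistic and p-value on top of this.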
My misjudgement is an excellent example of the availability heuristic. The pair of examples that presented themselves to me upon initial reflection were not representative of the complete set, but that did not stop me from drawing overarching, and incorrect, conclusions based on a sample of two.
This is why I use statistics: because I am a fallible human being, just like everyone else.
[I've tweeted about this before.]
If fashion stores believed in A/B testing, they would probably only sell white XXL shirts. Most customers would fit tent-sized garments; most colours go well with white. Giant colourless shirts would presumably have the better sales conversion rate by far.
But of course this would be far from optimal.
Customers come in different shapes and sizes. If you really want to maximise conversion, you will have to tailor to their specific needs and personal preferences. A/B testing might be the latest fashion, but the truth is that some customers will have a taste for B even though the majority might fancy A. This is why these 20 lines of code will beat A/B testing every time.
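The "20 lines of code" referenced above boil down to a multi-armed bandit. A minimal epsilon-greedy sketch in Python (the class and variant names are my own invention, not the original author's code):

```python
import random

class EpsilonGreedy:
    """Epsilon-greedy bandit: mostly show the best-known variant,
    but keep exploring the alternatives a fraction of the time."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {v: 0 for v in variants}
        self.wins = {v: 0 for v in variants}

    def _rate(self, v):
        return self.wins[v] / self.shows[v] if self.shows[v] else 0.0

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))  # explore
        return max(self.shows, key=self._rate)      # exploit

    def record(self, variant, converted):
        self.shows[variant] += 1
        if converted:
            self.wins[variant] += 1
```

Instead of splitting traffic 50/50 until a test concludes, the bandit keeps exploiting the current winner while still giving the underdog a chance, which is exactly what you want when B suits some customers even though A suits most.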
The trick is not to figure out whether A is better than B, but when A is better than B; and for whom.
Marketing should not be one-size-fits-all.
Marketing catchphrases like “recommended by experts” (an appeal to authority), “world-renowned bestseller” (candidly claiming consensus) and “limited supply only” (suggesting scarcity) are widely used to promote many different types of products. To a marketeer, these persuasion tactics are like universally coaxing super supplements that can make just about any offer seem more enticing.
But not all these advertising additives are created equal; and neither are apparently all consumers.
In a fascinating (at least, to scientific advertising geeks like me) study titled “Heterogeneity in the Effects of Online Persuasion”, social scientists Maurits Kaptein and Dean Eckles looked at the differences in susceptibility to varying influence tactics between individuals. What they found may change the way we think about recommendation engines and marketing personalization in general.
It is striking how large the heterogeneity is relative to the average effects of each of the influence strategies. Even though the overall effects of both the authority and consensus strategies were significantly positive, the estimates of the effects of these strategies was negative for many participants. [...] Employing the “wrong” strategy for an individual can have negative effects compared with no strategy at all; and the present results suggest there are many people for whom the included strategies have negative effects.
Our advertising additives can have adverse side-effects. Some people don’t respond well to authority; others don’t feel much for the majority rule. If you pick the wrong strategy for a particular individual, you may actually hurt your marketing efforts; independent of what product you are actually trying to sell.
Kaptein also collaborated on another publication, “Means Based Adaptive Persuasive Systems”, which looked at the combined effects of multiple persuasion strategies.
Contrary to intuition, having multiple sources of advice agree on the recommendation had not only no positive impact on compliance levels but actually had a slightly negative effect when compared to the preferred strategy. This is a fascinating discovery since one would assume two agreeing opinions would be stronger than one.
As strange as it may seem, in the case of combined cajolery, the whole is not only less than the sum of its parts; it is less than the single best bit.
1 + 3 = 2
Eckles and Kaptein conclude that personalization is key; and I couldn’t agree more.
To use the results presented above influencers will have to create implementations of distinct influence strategies to support product representations or customer calls to action. As in the two studies presented above, multiple implementations of influence strategies can be created and presented separately. Thus, one can support a product presentation on an e-commerce website by an implementation of the scarcity strategy (“This product is almost out of stock”) or by an implementation of the consensus strategy (“Over a million copies sold”). If technically one is able to represent these different strategies together with the product presentations, identify distinct customers, and measure the effect of the influence strategy on the customer, then one can dynamically select an influence strategy for each customer.
The good news is that we can do this today. Using Oracle Real-Time Decisions, choosing the best influence strategy for a particular customer can easily be implemented as a separate decision to be optimized for conversion. Alternatively, these strategies could simply be considered as another facet of your assets in RTD; similar to the way we would utilize product category metadata to share learnings across promotions.
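In spirit, the per-customer selection the authors describe could be sketched along these lines. This is an illustrative Python toy with invented segment and strategy labels, not how Oracle Real-Time Decisions implements it:

```python
import random
from collections import defaultdict

STRATEGIES = ["authority", "consensus", "scarcity"]

shows = defaultdict(int)  # (segment, strategy) -> times shown
wins = defaultdict(int)   # (segment, strategy) -> conversions

def pick_strategy(segment, epsilon=0.1):
    """Pick the best-known influence strategy for this customer segment,
    exploring alternatives occasionally while data is still sparse."""
    if random.random() < epsilon:
        return random.choice(STRATEGIES)
    def rate(s):
        n = shows[(segment, s)]
        return wins[(segment, s)] / n if n else 0.0
    return max(STRATEGIES, key=rate)

def record(segment, strategy, converted):
    shows[(segment, strategy)] += 1
    if converted:
        wins[(segment, strategy)] += 1
```

The key point from the research survives even in this crude form: the strategy is chosen per customer (here, per segment), so a group that reacts badly to, say, authority claims will stop being shown them.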
Personalization is about more than just deciding what you want to sell. This research clearly shows that a recommendation engine that can only select the “best” product is simply not good enough.
Because conversion sometimes requires a little persuasion.
Following our last discussion on [prospect.subject_area] I think the following article would be of particular interest to you.
Seth Godin writes:
Sure, it’s easy to grab a first name from a database or glean some info from a profile.
But when you pretend to know me, you’ve already started our relationship with a lie. You’ve cheapened the tools we use to recognize each other and you’ve tricked me, at least a little.
Increased familiarity begets heightened expectations. Personalization has its own uncanny valley.
The uncanny valley is a hypothesis in the field of robotics and 3D computer animation, which holds that when human replicas look and act almost, but not perfectly, like actual human beings, it causes a response of revulsion among human observers.
When you treat your customers as though you know them personally, they will be personally offended if you do not. Beware of the eerie hollow of broken promises.
Classical conditioning is underrated. Too many bad spy movies have taught us that ‘brainwashing’ is bad.
But conditioning can be a powerful tool for self-improvement. I’ve deliberately been playing the Brian Eno song Thursday Afternoon every time I felt myself immersed in ‘the zone’. In my mind, the track and the mental state have now become inextricably linked. This is so much the case that I can now descend into productivity Valhalla simply by listening to my personal work anthem.
In effect, I’ve brainwashed myself to work better in response to a particular tune.
There is nothing special about this trick. Anyone can do it and almost no real effort is required.
A few guidelines.
- Choose a song that is long. Not a two-minute ditty. This will also help with the next prerequisite.
- Choose a song that can stand to be repeated. You’ll want to be productive for longer than one play.
- Choose a song without lyrics. This is more personal. To me, words and melody are distracting.
- Choose a song that is timeless. Something you wouldn’t mind listening to in a few years’ time.
- Choose a song that is not a classic. Classics are played on the radio. That is not what you want.
- Carry your song with you always. You need to be ready. Productivity can strike at any moment.
- Play your song every time you are in the zone. Especially initially you want the bonding to be strong.
- Play your song without interruptions. Interruptions kill productivity. Interruptions break the spell.
- Never play your song when you are not in the zone. That would break the spell. Don’t do it.
- Don’t overuse it. There are limits to how productive you can be. This trick does not fix that.
- Don’t expect magic. The song will not always work. If it doesn’t work, stop listening right away.
Have I missed anything important? Feel free to add your tips and tricks in the comments below.
About 60% of the people stopped when we had 24 jams on display and then at the times when we had 6 different flavors of jam out on display only 40% of the people actually stopped, so more people were clearly attracted to the larger varieties of options, but then when it came down to buying, so the second thing we looked at is in what case were people more likely to buy a jar of jam.
What we found was that of the people who stopped when there were 24 different flavors of jam out on display only 3% of them actually bought a jar of jam whereas of the people who stopped when there were 6 different flavors of jam 30% of them actually bought a jar of jam. So, if you do the math, people were actually 6 times more likely to buy a jar of jam if they had encountered 6 than if they encountered 24, so what we learned from this study was that while people were more attracted to having more options, that’s what sort of got them in the door or got them to think about jam, when it came to choosing time they were actually less likely to make a choice if they had more to choose from than if they had fewer to choose from.
A fascinating psychological effect with clear implications for display advertising, but there is a lesson here for online marketeers and analysts as well.
In this study, fewer people stopped when there was less choice, but more people actually bought something. If we were only measuring the former (i.e. attention), and not the latter (i.e. sales), we would be led to think more choice would be about 50% more effective at bringing in customers. And boy, would we be wrong!
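The arithmetic behind those percentages is worth spelling out; a few lines suffice:

```python
# The jam-study numbers: stopping (attention) vs buying (sales).
stop_24, buy_24 = 0.60, 0.03  # 24 flavours: 60% stop, 3% of stoppers buy
stop_6, buy_6 = 0.40, 0.30    # 6 flavours: 40% stop, 30% of stoppers buy

# Overall conversion = share of all passers-by who end up buying.
conversion_24 = stop_24 * buy_24  # 0.018, i.e. 1.8%
conversion_6 = stop_6 * buy_6     # 0.12, i.e. 12%

print(conversion_6 / conversion_24)  # roughly 6.7x: the "6 times more likely"
print(stop_24 / stop_6)              # 1.5x: the misleading "50% more effective"
```

Measure only the stop rate and the big display looks 50% better; measure actual sales and the small display wins by a factor of more than six.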
Don’t get yourself in a jam; remember this next time you decide to measure click acceptance instead of actual sales to drive your online marketing effort. Clickthrough rates are useful as a measure by proxy, but they can be misleading.