Apparently, Coke’s Facebook page has fewer members than only Obama’s. Makes one wonder. If those people have nothing better to do than swat and squawk at a soft drink’s Facebook page, do they have any money to actually buy it?
Tuesday, 1 December 2009
Tuesday, 23 December 2008
Why 40:40:20 is 40:40:20
The rule says that 40% of a direct marketing campaign’s success comes from the list, 40% from the offer and only 20% from the creative. So creative is less important than list and offer.
I never understood what this rule really means or how it came about. It probably means that ‘40% of the variation can be explained by the list’ and so on.
Anyway, let’s take the somewhat vague meaning given in the first paragraph. Can it lead to the conclusion that creative is relatively unimportant? Actually, it can’t. Because the creative is never randomly selected in a test! Each competing piece is produced by a competent team, judged, revised and, within the limits of practicality, perfected. Why on earth should you expect creative to make much difference?
To draw a parallel, take basketball players: all of them are giants, so among them height and muscle cease to make much of a difference.
Tuesday, 3 June 2008
Size of direct marketing industry in India
Monday, 14 April 2008
Just try responding
What do I mean? Pick up your newspaper or open one of the spam mails in your inbox. Call, email or write back. And see what happens. My bet is that nothing will.
If someone gets back, it will be with zero information, intelligence or interest.
Why do they waste their money to waste consumers' time? One of those many mysteries of modern marketing, I suppose.
Why do offers work
I have been reading some interesting books lately, and a couple of them threw light on offers. Investment expert Michael J. Mauboussin's More Than You Know discusses the principle of reciprocity. In commercial terms, it means that if someone gives you something for free (say, a clothier offers you a soft drink while you are looking at his wares, before you have spent anything), you make it a point to reciprocate his gesture, usually by making a purchase.
This is most gratifying, because he cites the same source I used in my presentation on offers (http://pachatterjee.com/pdfs/offer.pdf): Influence: The Psychology of Persuasion by Robert B. Cialdini.
On the other hand, in Predictably Irrational, economist Dan Ariely talks about the amazing power of 'free', its de-risking effect, and the way it switches off normal rational choosing.
I had included the de-risking bit in my presentation too, though I didn't know then about the amazing experiments and explanations that Dr Ariely writes about.
I'd very much like to know more about offers, especially about price-offs (5% off), 'bulk' discounts (buy 1, get one free) and gifts (refill free with pen). Would you know of any books, articles or research papers?
Friday, 28 March 2008
Zero
“Hardly any conversions... in single figures. The whole lead generation exercise was a waste.”
“Zero. Zilch.”
You don't hear these too often, but you hear them often enough to worry. Because word spreads, and soon enough the whole industry is being tarred with the same brush: Direct marketing doesn't work. Lead generation is BS. Cold calling is the only way.
It's a mystery if something, anything, doesn't work at all. Probability loathes unmitigated disasters.
Let's think of a financial services company that agrees to pay, say, Rs 100 per lead. No sales manager would agree to such an amount unless he's quite sure that he will be able to convert a substantial number of these leads. We don't have to stretch credibility to envisage a binomial distribution with p = 10%, that is, a 10% probability that a lead randomly picked from the bought list will convert.
With such a probability what should one expect from a list of 2,500 leads, where a lead is defined as someone who explicitly expresses interest in a particular product of a particular brand by filling a form, and asking the company to get in touch with him?
The number of trials and the mean are large enough to apply the normal approximation. And this says that there's a 99% chance of making between 211 and 288 converts.
The probability of making fewer than a hundred conversions is... zero, negligible.
“10% is too high,” you'd say. Let's try 5% for argument's sake (remember, it's Rs 100 a lead).
Even after halving the probability of success, we retain a 95% chance of making 103 to 146 sales. The probability of converts staying within double figures is 1.09%. Again, negligible.
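For anyone who wants to check the arithmetic, here is a quick sketch in Python, using scipy's normal distribution and the same assumptions as above (2,500 leads, conversion probabilities of 10% and 5%):

```python
# Normal approximation to the binomial: n leads, each converting
# independently with probability p.
from math import sqrt
from scipy.stats import norm

def conversion_range(n, p, confidence):
    """Symmetric interval covering `confidence` of the likely outcomes."""
    mean, sd = n * p, sqrt(n * p * (1 - p))
    z = norm.ppf(1 - (1 - confidence) / 2)
    return mean - z * sd, mean + z * sd

def prob_fewer_than(n, p, k):
    """Probability of fewer than k conversions (normal approximation)."""
    mean, sd = n * p, sqrt(n * p * (1 - p))
    return norm.cdf((k - mean) / sd)

print(conversion_range(2500, 0.10, 0.99))   # approx (211.4, 288.6)
print(prob_fewer_than(2500, 0.10, 100))     # effectively zero
print(conversion_range(2500, 0.05, 0.95))   # approx (103.6, 146.4)
print(prob_fewer_than(2500, 0.05, 100))     # approx 0.011, the 1.09% above
```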
So what do we tell the sales manager when he complains that the leads were all duds? Logically, you should tell him that his lead management system doesn't exist: it's a wonder that his company does.
In real life, you bow your head and watch him renegotiate the rate, reducing it by 99%. Because theory be damned, he's god.
Numerology
Even with the qualifier, I abhor this type of data. The more I think about it, the more harm such 'research findings' seem to have done to direct marketing. No summary is provided, and no source is cited. We don't know how many tests were carried out, across how many brands, in how many product categories. The author is silent about the range of response rates. He doesn't, of course, tell us how much the responses fell by.
One wonders why he quoted the figure at all, except to enable an illiterate client to impose a counterproductive and baseless restriction on the work. Or to enable an equally illiterate agency person to fill the auditory vacuum during a meeting with numerical (numerological?) basalt.
Monday, 21 January 2008
33% better than random
Every book on the application of statistics to database marketing gives examples of how its model outperformed a random sample by so many percentage points.
Tuesday, 20 November 2007
Let’s take a random sample and see
Why do you want to take a random sample? An engineer makes his entire plan on a computer first and subjects it to every sort of test, even when each element in it has known properties. He makes scale models, and makes corrections and tests every step of the way, till he's absolutely sure. Then he puts the structure into use very gradually. Ditto for pharmaceuticals. Yet direct marketing must hold up to testing on a random sample!
Any idiot will tell you that a test on a random sample is bound to fail, because most people on the list are bound to say no. For starters, many will never buy what you are selling. And even the ones who will are very unlikely to all be in the market exactly when you're testing.
Which means that even if a section of the list gave a ‘thumbs up’ to the test, it's bound to be swamped by the overall and overwhelming failure that permeates the list as a whole.
Does that mean we shouldn't take random samples? Of course we should, but only after understanding what the term means. We should take a random sample of those sections we think are most likely to form the market; take enough from each of these to yield dependable numbers; and take a random sample of the balance, to see how well our selection works out in real life.
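A minimal sketch of what such a sampling plan could look like, assuming (purely hypothetically) a pandas DataFrame of customers with a 'segment' column; the segment names and sample sizes are illustrative, not prescriptive:

```python
import pandas as pd

def build_test_sample(customers, likely_segments, n_per_segment, n_balance, seed=1):
    """Oversample the segments expected to form the market, and add a
    small random sample of everyone else as a reality check."""
    likely = customers[customers["segment"].isin(likely_segments)]
    balance = customers[~customers["segment"].isin(likely_segments)]

    parts = []
    for _, group in likely.groupby("segment"):
        # enough names from each likely segment to yield dependable numbers
        parts.append(group.sample(min(len(group), n_per_segment), random_state=seed))
    # plus a plain random sample of the balance of the list
    parts.append(balance.sample(min(len(balance), n_balance), random_state=seed))
    return pd.concat(parts)
```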
Tuesday, 30 October 2007
No beta, yes risk
Direct marketing texts discuss sample sizes very little, and Type I and Type II errors even less. Admittedly, the latter are somewhat complicated, and business statistics books often recommend that readers go over the sections covering them carefully and repeatedly.
‘Complicated’ is, unfortunately, not synonymous with ‘of theoretical value only’. In fact, the opposite is true in this case, because the Type II error is of fundamental importance, as becomes apparent if we step back and ask why we bother about test and sample size in the first place.
We do so, basically, for two reasons. First, we don't want to throw the baby out with the bathwater. We don't want to take a test result that falls somewhat short of our expectation at face value. We'd rather use the test to estimate the list characteristic that led to the result, and, if that turns out to be acceptable, scale up. This is why we worry about α (the probability of rejecting a true null hypothesis).
At the same time, we don't want to lose money by scaling up when we shouldn't have. This means we should worry about the minimum response that would be acceptable. For this we must worry about β (the probability of accepting a false null hypothesis).
Yet, the commonly used formula for sample size completely ignores beta (not only in direct marketing books and online calculators but also in most of the business statistics texts I’ve come across)!
The formula goes like this:
N = (z_{α/2})² × p(1 − p) / E²
Where
z_{α/2} is the z value corresponding to the specified confidence level
p is the estimated response (population proportion) and
E is the ± sampling error allowed.
Let’s say estimated response is 2%; the allowed error is ±0.25%; and confidence level is 90%.
Putting these into the template yields a sample size of 8,485.
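For those who'd rather not dig out the Excel template, here is a small Python equivalent of the same formula, with the figures above plugged in (scipy supplies the z value):

```python
from math import ceil
from scipy.stats import norm

def sample_size(p, error, confidence):
    """N = (z_{alpha/2})^2 * p * (1 - p) / E^2"""
    z = norm.ppf(1 - (1 - confidence) / 2)   # z for the given confidence level
    return ceil(z ** 2 * p * (1 - p) / error ** 2)

print(sample_size(p=0.02, error=0.0025, confidence=0.90))   # 8485
```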
Let’s see what this means in terms of β if the true response (which we’ll get to know only if we scale up to the entire list) is, say, 1.65%. β turns out to be 23%, that is, roughly 1 in 4!
Hmm, that’s bad. There’s a 1 in 4 chance that, while accepting any figure between 1.75% and 2.25% as ‘as good as’ a 2% response rate, we’ll actually accept a list with only a 1.65% response.
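The β figure can be checked the same way, assuming we ‘accept’ the list whenever the sample response lands inside the ±0.25% band around 2%:

```python
from math import sqrt
from scipy.stats import norm

def beta(p_true, accept_low, accept_high, n):
    """Probability of 'accepting' when the true response rate is p_true."""
    se = sqrt(p_true * (1 - p_true) / n)
    return norm.cdf((accept_high - p_true) / se) - norm.cdf((accept_low - p_true) / se)

print(beta(p_true=0.0165, accept_low=0.0175, accept_high=0.0225, n=8485))  # approx 0.23
```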
No wonder one of the well-accepted rules of thumb in direct marketing goes: “As a rule, the response rate from a rollout to the balance of the list after a successful test mailing will usually be lower than the response from the test.”
This may be because of a variety of differences between test and rollout conditions. But one thing needs to be kept in mind: If one ignores Type II errors, the success of the test can be very suspect indeed.
A way out could be to use an alternative formula:
N = ( ( |z0| √(p0(1 − p0)) + |z1| √(p1(1 − p1)) ) / (p0 − p1) )²
Where
p0 is the estimated response
p1 is the value for which Type II error will be monitored
z0 is z_α or z_{α/2}, depending on whether the test is one- or two-tailed, and
z1 is z_β, where β is the limit on the Type II error probability when p = p1.
Let’s see what happens if the estimated response is 2%, the allowed error is ±0.25%, and the confidence level is 90% (as before), while p1 is 1.65% and β is 10% (i.e., there is only a 1 in 10 chance, not 1 in 4 as earlier, that we’ll accept a list with a real response rate of 1.65% by performing the test).
We get a sample size of 12,643.
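And a sketch of the alternative formula, with the same figures plugged in:

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_with_beta(p0, p1, alpha, beta, two_tailed=True):
    """N = ((|z0| * sqrt(p0(1-p0)) + |z1| * sqrt(p1(1-p1))) / (p0 - p1))^2"""
    z0 = norm.ppf(1 - alpha / 2) if two_tailed else norm.ppf(1 - alpha)
    z1 = norm.ppf(1 - beta)
    numerator = abs(z0) * sqrt(p0 * (1 - p0)) + abs(z1) * sqrt(p1 * (1 - p1))
    return ceil((numerator / (p0 - p1)) ** 2)

print(sample_size_with_beta(p0=0.02, p1=0.0165, alpha=0.10, beta=0.10))   # 12643
```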
Sure, it's a roughly 50% increase in sample size. But it may be well worth it if the rollout numbers and costs are far higher than the test's.
In any case, won’t decisions be better if they were taken with a clearer idea of the risks?
PS: Please excuse the ungainly appearance of the formula. It's the best I could manage using a Word file. Both formulas can be found in the useful templates at http://highered.mcgraw-hill.com/sites/0070620164/student_view0/excel_templates.html
Thursday, 4 October 2007
Respond and be damned
My colleagues got a very fancy invitation from HP about their new cost-effective technology. Beautiful graphics, amazing personalisation.
Wednesday, 11 July 2007
‘It needs to cut the clutter’
I can’t remember the last time I got a snail-mail solicitation for anything. I get, on average, two tele-sales calls a week. Since I switched to a post-paid subscription, almost the only sales SMS I get is from my service provider, trying to sell me tunes. (Which I never buy. Why don’t they stop trying? A mystery.)
I do get piles of spam in my inbox, but let’s ignore that for a moment (Why? Because spam is so easy to ignore and chuck out).
Fact is, clutter is not the problem at all. If you want proof, just call a relative or friend in the US or UK and ask how much direct marketing material he gets every day.
The points I’m trying to make here are: (a) We should quit bothering about clutter and (b) maybe all that direct marketing needs to do to get attention here is exist.
Friday, 25 May 2007
Social Security
First, this assumes that every marketer asks for the social security number while processing every transaction.
Second, it assumes that these numbers are readily and accurately given.
Third, this assumes that the American government is so efficient that it eliminates all false and duplicate numbers.
What is the truth?
Tuesday, 17 April 2007
The economics of outsourcing
If Indians are to supply creative or analytics services to the West, they must do so at a small fraction of Western rates. Otherwise, the work wouldn’t exist. There would be no cost advantage. Sounds logical.
Worshipping response
Every direct marketing book and website venerates response rate, and rightly so. Yet response is to direct marketing what the score is to cricket: the real learning is in the journey to the response, and a mere reading of scores tells next to nothing.
Thankfully, there are a few first-class case studies that include sufficient details. Unfortunately, they are vastly outnumbered by case-lets that reveal little but damage much.
Monday, 9 April 2007
Kill the pilot
You start haggling. They agree to slave rates. Once the relationship is established… the whore will become the wife.
You’ll run a pilot, on a profit-sharing basis. Even as your company pays a multinational consultant millions for advice you’ll never use, they must, if they believe that direct marketing can produce results, share the risk. Never mind that your share, as a fraction of your marketing spend, is a tiny fraction of their share, as a fraction of their turnover.
The pilot ‘fails’. The agency insists it’ll succeed if it’s scaled up. You know better. Direct marketing cannot work for your industry. Why leave it at that? No, let’s go a step further. Direct marketing doesn’t work in India. Because nobody reads.
If only the couples of this country piloted sex as marketers pilot direct marketing, our population would have been a tiny fraction of what it is today.