John Caples’s Tested Advertising Methods is a book I will unhesitatingly recommend to any copywriter, as I will his How to Make Your Advertising Make Money and Making Ads Pay.
So what I am about to write is, for me, sacrilege. Yet it must be committed.
The tests that Caples talks about leave a lot to be desired. I was just going through Tested Advertising Methods (5th edition, revised by Fred E. Hahn [a revision that does it only harm]) again, and found the inadequacy of the data particularly puzzling.
Take the famous example where Caples says changing an ad headline from “Repair Cars – quickly, easily, right” to “Fix Cars – quickly, easily, right” increased response by 20%. He gives no information about the actual numbers or percentages, or where the ad ran.
So let’s suspend disbelief for a while and pick some numbers off the Web.
The average weekday circulation of a newspaper in the
Tested Advertising Methods came out in 1932, so here are our assumptions: (a) the circulation in 1932 was not very different (we have no real basis for this, but the data at the site goes back only to 1940), and (b) readership was equal to circulation (again, a rather silly assumption, but the purpose here is to explore a possibility, not to prove a point).
The situation we can imagine goes like this:
- The ‘repair’ ad could have pulled up to 201 responses,
- and the ‘fix’ ad 20% more, that is, 241 responses,
- without the response rates being significantly different (at a 5% level of significance).
Had the ad run in an average Sunday newspaper, the responses could have gone up to 209 and 250, respectively, without the response rates being significantly different.
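The arithmetic behind this can be sketched with a standard two-proportion z-test. Caples gives no circulation figure, so the 300,000 below is purely an illustrative assumption on my part; with response counts this small relative to circulation, the z-value barely depends on the figure chosen:

```python
import math

def two_prop_z(responses_a, responses_b, n):
    """Two-proportion z-statistic for equal audience sizes, pooled SE."""
    p_a, p_b = responses_a / n, responses_b / n
    p_pool = (responses_a + responses_b) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
    return (p_b - p_a) / se

# Hypothetical weekday circulation -- an assumption, not Caples's data.
N = 300_000
z = two_prop_z(201, 241, N)
print(round(z, 3))  # → 1.903, below the 1.96 cutoff for a two-tailed test at 5%
```

In other words, a 20% lift from 201 to 241 responses falls just short of significance at the 5% level, which is exactly why 241 is the largest “non-significant” figure in the scenario above.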
The same complaint can be made against the comparison between “Save one gallon in every ten” and “Car owners! Save one gallon of gas in every ten” where, on testing in a daily newspaper, the latter pulled 20% better than the former.
Another famous example is the one where “Hay Fever” pulls 297 sample requests while “Dry Up Hay Fever” pulls 380, a ‘27% increase’. The increase in response rate (assuming the ads came out in average newspapers on weekdays) could have been between 0.15% and 0.61%.
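One way to see how small the underlying difference is: compute a 95% confidence interval for the difference in response rates. The circulation below is again an illustrative assumption (so these figures won’t reproduce the 0.15%–0.61% range above, which rests on the circulation data mentioned earlier), but the shape of the conclusion is the same: even a “27% increase” translates into a difference of a few hundredths of a percentage point:

```python
import math

def diff_ci(responses_a, responses_b, n, z_crit=1.96):
    """95% CI for the difference of two response rates (unpooled SE)."""
    p_a, p_b = responses_a / n, responses_b / n
    se = math.sqrt(p_a * (1 - p_a) / n + p_b * (1 - p_b) / n)
    d = p_b - p_a
    return d - z_crit * se, d + z_crit * se

# Hypothetical circulation of 300,000 -- not a figure Caples provides.
lo, hi = diff_ci(297, 380, 300_000)
print(f"{lo:.4%} to {hi:.4%}")  # roughly 0.01% to 0.04%
```

A difference that a statistician would call “significant” can still be commercially trivial, which is the distinction these ‘tested’ results never let you make.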
In quite a few cases, neither response rates nor responses have been quoted; we’re simply told A did better than B.
Now, if the differences in response rates were not always significant, whether statistically or in business terms, the businesses involved in those testing decisions would not have gained or lost much.
The trouble lies elsewhere: with direct marketing copywriters who believed the ‘tested’ fact that ‘straight and simple always out-pulls the creative’ and put their own careers in jeopardy, because that belief is almost always read as an excuse for lack of talent.
To all such writers, and to writers who have not yet formed their beliefs, I would recommend this site: Statistics Every Writer Should Know. A little knowledge may be a dangerous thing, but none at all can be disastrous.

PS: I used an Excel template from Aczel & Sounderpandian for my calculations. My calculations are at http://docs.google.com/Doc?id=dd3bjnd7_28rt37t and the templates are available here.