I like this rule of thumb a lot. It's very straightforward and easy to remember, so you can apply it quickly to check significance and then use a more rigorous method if necessary.
But what about the vast majority of people who don't click either ad? Those are the "ad impressions" that didn't lead to a click. Shouldn't those count somehow in the statistics?
No, they shouldn't; those are "mistrials."
I don't think those are mistrials. Clicks vs impressions is a separate test and should be treated as such.
The test only counts if an equal number of impressions is given. The viewers aren't given a choice between ad A and ad B; they have a choice between A (or B) and nothing. I don't think the author explained this well enough. Obviously (to everyone here) the test is inconclusive if one ad is shown much more than the other.
The statistics are a little bit off - the Chi-squared isn't appropriate for the example he gives (sample size too small), although it would be fine in practice. What you really want in that situation is the exact binomial test.
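To make the binomial-test point concrete, here is a minimal pure-Python sketch. The click counts (7 clicks on ad A vs 1 on ad B) are hypothetical numbers chosen for illustration, not the article's example; under the null hypothesis each click is equally likely to land on either ad, so p = 0.5.

```python
from math import comb

def exact_binomial_p(k, n, p=0.5):
    """Two-sided exact binomial test: p-value for observing k successes
    out of n trials under H0 success probability p, summing the
    probabilities of all outcomes at least as extreme (i.e. no more
    probable) than the observed one."""
    def pmf(i):
        return comb(n, i) * p**i * (1 - p)**(n - i)
    observed = pmf(k)
    # the tiny tolerance guards against floating-point ties
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed * (1 + 1e-9))

# Hypothetical small sample: 7 clicks on ad A, 1 on ad B (8 clicks total).
p_value = exact_binomial_p(7, 8)
print(round(p_value, 4))  # 0.0703 -- not significant at the 0.05 level
```

Even a 7-to-1 split fails to reach p < 0.05 here, which is exactly why the chi-squared approximation (and any eyeballed rule) can mislead at sample sizes this small.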
Fair point, and yes, it's quite informative from that point of view. I didn't like his approach, however; he seemed to be beating around the bush a lot with phrases like
"I'm here to rescue you with a statistically sound yet incredibly simple formula"
rather than giving a more honest "hey, what you need is standard deviation, and here's how it works!"
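The article's exact formula isn't quoted in this thread, so as an assumption, here is the common form such standard-deviation rules of thumb take. Under the null hypothesis each click is a fair coin flip between the two ads, so the difference in click counts has standard deviation roughly the square root of the total; a two-sigma threshold gives a quick significance check.

```python
from math import sqrt

def looks_significant(clicks_a, clicks_b):
    """Rough two-sigma rule of thumb (a common form, not necessarily the
    article's exact formula): under H0 the clicks split 50/50, so the
    difference a - b has standard deviation sqrt(a + b); flag the result
    if the observed difference exceeds two standard deviations."""
    total = clicks_a + clicks_b
    return abs(clicks_a - clicks_b) > 2 * sqrt(total)

print(looks_significant(70, 30))  # True:  |70-30| = 40 > 2*sqrt(100) = 20
print(looks_significant(7, 3))    # False: |7-3| = 4 < 2*sqrt(10) ~ 6.32
```

Note how the same 70/30 ratio flips from significant to not significant as the sample shrinks, which is the whole value of stating the rule in terms of standard deviations rather than percentages.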
Seems I'm haemorrhaging karma over this. I misjudged the method used (Pearson's chi-squared, as pointed out by HN user pibefision). Fine, that'll teach me to skim-read and comment.
My point above was that the author could have come out and labelled it as such in the article.