I like that this person challenges aspects of online marketing, but the value of the blog post drops to zero the moment you state A/B testing "doesn't matter". A/B testing always matters. It's not even an opinion - A/B testing is rooted in the scientific method, and that's an axiom that needs no proof or disproof; it just "is". Now if you have a marketing tool that is outputting unexpected results, take it up with the marketing tool. Even Optimizely recently stated they updated the way they report results to reduce false positives.
If nothing else, this succinctly describes how to vet decision processes based on statistical tests.
I ran into a situation in my own organization where costly decisions were being made on variations in statistical test results that I suspected were just local minima/maxima of natural variance. Replaying the decision process against the same target multiple times (sketched below) provided strong evidence for changing how we made those decisions.
It wasn't enough to just calculate the estimated population variance, etc., because explaining the significance of that was essentially a TL;DR problem to anyone whose opinion counted. It also required significantly better stats skills than I had, since adequately explaining the issue generally requires more expertise than recognizing it in the first place.
However, empirically demonstrating that the decision process was flip-flopping on the same available information was extremely compelling.
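For anyone who wants to run the same kind of demonstration, here is a minimal sketch of that replay idea - essentially an A/A test repeated many times. The conversion rate, sample size, and significance threshold below are made-up numbers, not anything from the situation described above; the point is just that a fixed decision rule will keep declaring "winners" between two identical arms at roughly its false-positive rate.

    import random
    import math

    # A/A simulation: both "variants" share the same true conversion rate,
    # so any "winner" the decision rule declares is pure noise. Replaying
    # the identical experiment many times shows how often it flip-flops.

    TRUE_RATE = 0.05      # hypothetical conversion rate, same for both arms
    N_PER_ARM = 10_000    # hypothetical visitors per arm in each replay
    REPLAYS = 1_000       # how many times we re-run the identical experiment
    ALPHA = 0.05          # significance threshold the decision rule uses

    def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test p-value for a difference in conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        if se == 0:
            return 1.0
        z = (p_a - p_b) / se
        # two-sided p-value from the normal CDF
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    false_winners = 0
    for _ in range(REPLAYS):
        conv_a = sum(random.random() < TRUE_RATE for _ in range(N_PER_ARM))
        conv_b = sum(random.random() < TRUE_RATE for _ in range(N_PER_ARM))
        if z_test_two_proportions(conv_a, N_PER_ARM, conv_b, N_PER_ARM) < ALPHA:
            false_winners += 1  # rule "decided" one identical arm beat the other

    print(f"Declared a winner in {false_winners}/{REPLAYS} replays "
          f"(~{false_winners / REPLAYS:.1%}) despite no real difference.")

With numbers in that ballpark you should see a "winner" in roughly 5% of replays, which is exactly the sort of flip-flopping on identical information that makes the problem obvious to people who would tune out a lecture on population variance.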