Thursday, 26 March 2009
What is Statistical Significance?
I've somewhat overlooked this topic since starting this blog, but for completeness, shall we say, I think it's time to cover the role of statistical significance in optimisation testing.
One of the biggest headaches when running an A/B or multivariate test on your website is knowing when your test is complete, or at least heading towards a conclusion. Essentially, how do you separate signal from noise?
Many third-party tools give you metrics to determine a test's conclusiveness; for example, the Maxymiser testing tool displays a 'Chance to beat all' metric for each page combination or test variant within your test.
More importantly, what underpins these metrics is the concept of statistical significance. Essentially, a test result is deemed significant if it is unlikely to have occurred through pure chance. A statistically significant difference means there is statistical evidence that a genuine difference exists.
Establishing statistical significance between two sets of results allows us to be confident that we have results that can be relied upon.
As an example, suppose you have an A/B test with two different page designs. Analysing the data shows two results:
Page 1 - 1,529 generations with 118 responses or actions - giving a conversion rate of 7.72%.
Page 2 - 1,434 generations with 106 responses or actions - giving a conversion rate of 7.39%.
Looking at the two results, which do you think is better? Is page 1 better because it has a higher conversion rate than page 2? Feeding those two results through a basic statistical significance calculator (I'm using Google's Optimizer test duration calculator) tells us that the two results are 0.335218 standard deviations apart and are therefore not statistically significant. This suggests that it is highly likely that noise is causing the difference in conversion rates, so plough on with your testing. If 95% statistical significance is achieved, you can safely say the test is conclusive with a clear winner. That indicates a strong signal and gives you a result on a wholly statistical basis, as opposed to human interpretation.
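For the curious, the sort of maths behind that "standard deviations apart" figure can be sketched as a two-proportion z-test. This is my own illustration using the unpooled standard error, which happens to reproduce the figure above; I can't say it is exactly what the calculator does under the hood:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """How many standard deviations apart are two conversion rates?
    Uses the unpooled standard error of the difference in proportions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return (p_a - p_b) / se

# Page 1: 118 conversions from 1,529 generations
# Page 2: 106 conversions from 1,434 generations
z = two_proportion_z(118, 1529, 106, 1434)
print(round(z, 6))  # roughly 0.3352 -- matching the figure quoted above

# Two-sided confidence that the difference is real: P(|Z| < z) = erf(z / sqrt(2))
confidence = erf(abs(z) / sqrt(2))
print(f"{confidence:.0%}")  # well short of the 95% threshold
```

At roughly 95% confidence and above you would call the test conclusive; here the two pages are nowhere near that, which is why the calculator says "keep testing".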