You can hire my services

I am Ben Lang, an independent web conversion specialist with over 20 years of experience in IT and digital and 12 years of Conversion Rate Optimization (CRO) know-how. I provide a full analysis of your website's conversion performance and execute tried and tested CRO exercises, through A/B testing, split testing or multivariate testing (MVT), to fix your online conversion issues. Contact me at https://www.benlang.co.uk/ for a day rate, or catch up with me on LinkedIn.

Culling multivariate test variants in a Maxymiser test

I've covered the topic of culling on this blog before, here. Now I'll go through my method of identifying test variants to cull from a running MVT test in Maxymiser when the test tracks multiple actions.

1. Below is a screengrab of an MVT test report in Maxymiser after the test has run for a week. On the left are the bottom-ranking variants for a 'click apply button' action; on the right, the bottom-ranking variants for a 'submit application' action.


2. The conversion rate uplift is negative for these variants; they are adding nothing to the test overall, so they need to be removed, or 'culled'.

3. Identify page combinations (see the Page ID field above) that show negative uplift and appear in the bottom ranking across both actions, then select the 'remove page' option within the console (a small sketch of this cross-referencing appears after the note below).

4. Comparing the test report before and after the cull shows the immediate effect on the 'chance to beat all' metric within the test.

The 'chance to beat all' value moves from 27.86% for the lead variant to 28.11%, and this shift cascades down through the other variants left in the test.
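Maxymiser does not publish exactly how it computes 'chance to beat all', but a common way to produce such a figure is Bayesian: model each variant's conversion rate as a Beta posterior and estimate, by Monte Carlo sampling, how often each variant draws the highest rate. A minimal sketch, using made-up counts rather than the figures in the report above:

```python
import random

def chance_to_beat_all(results, draws=50_000):
    """results: dict of variant name -> (conversions, visitors).
    Returns variant name -> estimated probability of being the best."""
    wins = {name: 0 for name in results}
    for _ in range(draws):
        # one draw per variant from its Beta(conversions+1, failures+1) posterior
        samples = {name: random.betavariate(c + 1, v - c + 1)
                   for name, (c, v) in results.items()}
        wins[max(samples, key=samples.get)] += 1
    return {name: n / draws for name, n in wins.items()}

# illustrative counts only, not the report's real data
print(chance_to_beat_all({"P1": (120, 4000), "P2": (150, 4000), "P3": (135, 4000)}))
```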

This culling exercise would then be repeated at periodic intervals for the remainder of the test.

Note: caution should be taken when removing variants from a test. Take the number of generations and actions into consideration; over-culling a test can bring it to an early and unproductive conclusion.
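Step 3 above boils down to a simple cross-reference between the two action reports. A sketch of that selection logic, with hypothetical Page IDs and uplift figures rather than real console output:

```python
# Hypothetical uplift (%) per page combination, one dict per tracked action.
click_apply_uplift = {"P2": 3.2, "P4": -2.1, "P5": 0.6, "P7": -1.4, "P9": -0.8}
submit_app_uplift  = {"P2": 2.9, "P4": -0.5, "P5": -1.1, "P7": -2.6, "P9": 0.3}

# Cull candidates: combinations with negative uplift for *both* actions.
cull = sorted(p for p in click_apply_uplift.keys() & submit_app_uplift.keys()
              if click_apply_uplift[p] < 0 and submit_app_uplift[p] < 0)
print(cull)  # ['P4', 'P7'] -> remove these via the console's 'remove page' option
```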

Page Combination Removal Feature in Maxymiser

We use Maxymiser as our multivariate/A/B testing tool of choice. Maxymiser lets you see how individual test variants are performing in its console. In a previous post about culling variants I talked about how we actively remove under-performing variants from our tests. Whether that is the right thing to do in a test scenario has been a contentious issue (see that post for the full discussion). However, Maxymiser has recently added a new feature to its test console that lets you remove under-performing page combinations* from your test, and to see immediately and clearly the impact of doing so on the remaining page combinations.

Below is a screenshot of our Maxymiser console displaying an active test before we remove an under-performing page combination. The P8 page combination is highlighted as under-performing: it has an uplift of minus 1.76% and a 'Chance to Beat All' value of 25.23%.



Now, after we exclude the P8 page combination from the test, you can see what the results look like. The lead page combination, P2, leaps from a 39% 'Chance to Beat All' to 51%, speeding up your test. The overall uplift value also moves from 3.19% to 3.59%.



So the introduction of the page exclusion feature lets you experiment more with 'what if' scenarios. We like to think this enhancement was made to Maxymiser as a direct response to our needs, but that's probably not the case!
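The same 'what if' can be tried offline with the chance_to_beat_all() helper sketched in the previous post above: recompute the probabilities with the loser excluded and compare. The counts here are again invented, not the console's figures:

```python
# assumes the chance_to_beat_all() helper sketched earlier; all counts invented
before = {"P2": (180, 4000), "P5": (165, 4000), "P8": (130, 4000)}
after  = {name: counts for name, counts in before.items() if name != "P8"}

print(chance_to_beat_all(before)["P2"])  # leader's chance with P8 still in
print(chance_to_beat_all(after)["P2"])   # leader's chance once P8 is removed
```

Intuitively, the probability mass that previously went to the removed combination is redistributed across the survivors, which is why the leader's figure jumps.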


* A page combination is a specific combination of multivariate test variants served together on a page

Culling your test variants


As an Optimization team, we are new to the whole multivariate testing business. We are more than aware that we don't always do things by the book, so when we undertake a multivariate test we don't always adhere to the basic principles of testing.

The biggest rule we tend to ignore is 'do not tamper with your test'. The trouble is we always have to keep an eye on the bottom line, so we are constantly asking ourselves whether our testing is harming sales. Are we actually reducing the conversion rate of the website?
My colleague generally monitors what's going on downstream during a test and looks at the basic application submission rates for our products. If he notices a downturn in conversion rate during the course of a test, we get a bit nervous.

Thankfully our multivariate testing tool, Maxymiser, lets us look at how individual variants are performing. If after a period (usually around one week into a test, in our case) we start to see a downturn, we examine closely which variants we can 'cull' from the test.
Once we have highlighted the under-performers, we downweight* them out of the test entirely. This is beneficial for two reasons:

1. You minimise negative impact on conversion and sales.

2. You reduce the number of page combinations in the test.

The lower the number of page combinations, the shorter your test period. This is great for us because of the second rule of testing that we frequently ignore: 'allocate enough time for testing'. Basically speaking, we run tests for a far shorter period than is recommended.
Most tests, given enough visitor traffic to your site, run anywhere from 4 to 10 weeks, or even longer. We tend to run tests for 2 to 6 weeks. Our excuse is that there is so much other activity on the website at any given time, by other people, that we have a very narrow window in which to test and get a result.

Another key thing when planning your MVT test is knowing how much traffic your site gets and whether you have enough to run all your page combinations and see an outright winner at the end of your test. So far we've been reasonably lucky, in that we've had enough traffic to run tests for a relatively short period and still achieve a winner.
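For a rough feel for that traffic question, a back-of-envelope estimate can come from the standard two-proportion sample-size approximation (here at roughly 95% confidence and 80% power); every input figure below is illustrative:

```python
def required_days(daily_visitors, baseline_cr, min_uplift, combinations):
    """Rough test duration: each page combination needs enough visitors to
    detect a relative uplift of min_uplift over baseline_cr."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + min_uplift)
    z_alpha, z_beta = 1.96, 0.84           # ~95% confidence, ~80% power
    p_bar = (p1 + p2) / 2
    n_per_combo = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                    + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
                   / (p2 - p1) ** 2)
    return combinations * n_per_combo / daily_visitors

# e.g. 5,000 visitors/day, 3% baseline, detect a 20% relative uplift, 12 combos
print(round(required_days(5000, 0.03, 0.20, 12)))  # roughly 33 days here
```

Halving the number of page combinations halves the estimated run time, which is the arithmetic behind culling to speed a test up.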

Obviously, ignoring key testing rules and principles is not recommended. But if, like our Optimization team, you're stuck between a rock and a hard place and there's a pressing need to get some kind of testing done, our early experience has shown that you can bend the rules and get some kind of learning or outcome in a short space of time.
* In multivariate testing, each variant is usually allocated a weighting. For example, if you give a variant a 50 weighting in the test console it will be served 50% of the time, while the default content is served the other 50% of the time.
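A small sketch of that weighting mechanic: each variant carries a serve weight, and downweighting a variant to 0 removes it from rotation entirely. Variant names and weights here are illustrative:

```python
import random
from collections import Counter

weights = {"default": 50, "variant_a": 50, "variant_b": 0}  # variant_b culled

def serve():
    # pick one variant per visitor, proportionally to its weight
    names, w = zip(*weights.items())
    return random.choices(names, weights=w, k=1)[0]

print(Counter(serve() for _ in range(10_000)))  # ~50/50 default vs variant_a
```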