You can hire my services

I am Ben Lang, an independent web conversion specialist with over 20 years of experience in IT and digital and 12 years of Conversion Rate Optimisation (CRO) know-how. I provide a full analysis of your website's conversion performance and the execution of tried-and-tested CRO exercises, deployed through A/B testing, split testing or multivariate testing (MVT), to fix your online conversion issues. Contact me at https://www.benlang.co.uk/ for a day rate, or catch up with me on LinkedIn.

Tracking uplift post A/B testing


Once you've run either an A/B test or a multivariate (MVT) test on your website and your testing tool of choice tells you the test has reached a statistically conclusive result, how do you continue to measure performance, and, more importantly, should you continue to monitor it at all?


I know it sounds reckless; surely continuing to track uplift is the responsible thing to do, right? But in reality there's an important aspect of optimisation testing that needs to be taken into consideration here. A test outcome is usually the result of the following variables:


product benefit + moment in time + market position + customer experience


Can you continue to accurately measure and account for each of these factors after you've conducted a test? The honest answer is probably no. Nor do I know many people with the personal bandwidth to monitor every single test once it's finished; if I wanted to double my workload, that would certainly be the way to do it!


The important thing is to ensure that your test has been given enough testing time and traffic volume in the first place before you conclude it, as the post Don't Fool Yourself with A/B Testing reasonably argues. If you've done that, you should have a reasonable level of confidence in its future performance.
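
For a rough feel of what "enough traffic" actually means, here's a minimal sketch of the standard two-proportion sample-size calculation. The baseline rate, target uplift and confidence/power figures below are illustrative assumptions, not numbers from any particular test:

```typescript
// Rough sample-size sanity check before declaring an A/B test conclusive.
// Standard two-proportion approximation:
//   n per variant = (z_alpha/2 + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p2 - p1)^2
// All figures here are illustrative assumptions.

function visitorsPerVariant(
  baselineRate: number,   // current conversion rate, e.g. 0.03 = 3%
  relativeUplift: number, // smallest uplift worth detecting, e.g. 0.10 = 10%
  zAlpha = 1.96,          // 95% confidence (two-sided)
  zBeta = 0.84            // 80% power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeUplift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// A 3% baseline and a 10% relative uplift needs roughly 53,000 visitors
// per variant, which is why calling a test after a few hundred visitors
// is usually fooling yourself.
console.log(visitorsPerVariant(0.03, 0.10));
```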


If you still want to track uplift after testing, I would suggest the following options are available to you:


1. Set up a Google Analytics Goal. This gives you the ability to track the performance of a specific customer journey within your normal web analytics. Yes, you have to use Google for this one, but any web metrics tool worth its salt will have the same functionality.


2. Leave your test running. This to me is the fail-safe option. Once you have a test winner, up-weight it over the default content, but leave a small percentage of your traffic going to the default as a benchmark for continued performance. Where possible I usually leave 5% going to the default for a period of time to ensure I've made the right decision (there's a rough sketch of this kind of re-weighting after the list).


3. Run a follow-up experiment. This is a great feature in Google Website Optimizer, but you can do the same in any other testing tool if you have the resource to do it and there's lingering doubt about the original test outcome.


4. Bespoke tracking. On the pages I optimise, I append tracking values to the application form which, when submitted to a sales database, can be used to tie sales back to a specific landing page. Using this approach you can monitor conversion rate performance before, during and after a test. I can't recommend this approach enough, although whether you can implement it depends entirely on the particular design of your online application forms. There's a rough sketch of the idea below.
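
To make option 4 a little more concrete, here's a minimal sketch of what that kind of form tagging might look like. The form selector, the "source_page" field name and the value format are all hypothetical; the real implementation depends entirely on how your application form and sales database are built:

```typescript
// Sketch of option 4: stamp the application form with the landing page
// (and, optionally, the test variant) so that each sale recorded in the
// sales database can be tied back to the page that produced it.
// The field name "source_page" and the value format are hypothetical.

function tagApplicationForm(form: HTMLFormElement, variant: string): void {
  const tracker = document.createElement("input");
  tracker.type = "hidden";
  tracker.name = "source_page"; // hypothetical field your back end stores
  tracker.value = `${window.location.pathname}|${variant}`; // e.g. "/loans-landing|variant-1"
  form.appendChild(tracker);
}

// Tag the form on page load; the value then travels with the submission
// into the sales database, so conversion can be reported before, during
// and after a test without relying on the testing tool itself.
const form = document.querySelector<HTMLFormElement>("#application-form");
if (form) {
  tagApplicationForm(form, "variant-1");
}
```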
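
And going back to option 2, here's a rough sketch of the kind of re-weighting involved once you have a winner. The 95/5 split and the hashing scheme are illustrative assumptions; in practice your testing tool will normally handle this weighting for you:

```typescript
// Sketch of option 2: keep ~5% of traffic on the original page as an
// ongoing benchmark after the winner has been up-weighted.
// The split and the hash are illustrative assumptions.

type Variant = "winner" | "default";

// Deterministic bucket from a visitor id so a returning visitor
// always sees the same version.
function bucket(visitorId: string): number {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100; // 0..99
}

function assignVariant(visitorId: string, holdbackPercent = 5): Variant {
  return bucket(visitorId) < holdbackPercent ? "default" : "winner";
}

// Example: route a visitor, then compare the default group's conversion
// rate against the winner's over time as a continued benchmark.
console.log(assignVariant("visitor-abc123"));
```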


That's about it really. If I think of any other methods for ongoing tracking I'll add them in.


Happy testing!

1st Direct Bank throw it out there

First Direct have recently launched First Direct Labs, a new section of their site which shows what new ideas and concepts they are testing for their online experience. Current tests include QR code functionality, redesign concepts and mobile apps. Visitors are encouraged to rate these concepts and designs as well as to suggest new tests of their own. The missing link here is that this feedback is obviously a channel for qualitative testing of these ideas, and to my mind seems to be their only current means of gauging how effective these new ideas will be.

Obviously I'm not privy to their whole web testing strategy, but I would hope there is more in their testing toolbox than just this focus-group approach. Either way, First Direct are to be commended for divulging part, if not all, of their testing strategy; it's a safe thing to do, as experience has shown that competitors can rarely benefit from implementing test findings vicariously without first doing their own comprehensive testing. I'll address vicarious testing in greater detail in a later post.

Continuing with my ongoing retrospective theme, it's worth pointing out that I haven't been averse to going public with test ideas and designs in the past, as seen in this post from 2009 where I asked the general public to rate our page designs following an inconclusive round of MVT testing.

Testing During a Traffic Spike revisited





In a post I wrote back in 2009, Riding the tsunami, I talked about the benefits of combining MVT testing with campaign activity. Coming late to the party, but nonetheless getting there in the end, is Get Elastic, a very good web optimisation site with some valuable testing ideas and concepts, which after a bit of soul searching endorses this very same approach in an article titled "Should You Avoid Testing During a Traffic Spike?" Definitely worth a read.


I think fundamentally the message remains the same: it's okay to be running MVT tests during a campaign if you're trying to optimise that campaign and not the long-term web experience of your visitors. As ever, testing results are usually the outcome of the following variables:


 product benefit + moment in time + market position + customer experience

The law of diminishing returns



Once you've optimised a page using multivariate testing (MVT) and/or split testing (A/B testing) and managed to achieve a respectable uplift in sales conversion, when it comes to revisiting that page with further testing you're likely as not entering the realm of diminishing returns.

This is historically an economics term, but it also applies to web optimisation testing: subsequent testing or optimisation activities prove less rewarding, in terms of finding web content that works, than the original or earlier rounds of testing.

This was illustrated last week when a colleague produced a new version of an optimised landing page. He wisely wanted to test whether it could perform as well as, or even better than, the existing page. The original page was the result of several rounds of previous optimisation testing and was already proving to be very good at converting visitors into submitted online applications. The image below shows the ongoing split test as conducted in Google Website Optimizer. The original or default page is proving hard to beat; the new page (variant 1) is bettering the original, but it's not pulling away with the massive uplift you might see in the first or second rounds of testing.
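
To make that concrete, here's a minimal sketch of the arithmetic sitting behind a screenshot like that: conversion rates for the two versions, the relative uplift, and a simple two-proportion z-test. The visitor and conversion counts are made up for illustration and are not the actual figures from the test above:

```typescript
// Sketch of comparing an already-optimised original page with a new
// variant. The counts below are made-up illustrations, not the real
// Google Website Optimizer figures.

interface VariantResult {
  visitors: number;
  conversions: number;
}

function compare(original: VariantResult, variant: VariantResult) {
  const p1 = original.conversions / original.visitors;
  const p2 = variant.conversions / variant.visitors;
  const relativeUplift = (p2 - p1) / p1;

  // Two-proportion z-test using the pooled conversion rate.
  const pooled =
    (original.conversions + variant.conversions) /
    (original.visitors + variant.visitors);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / original.visitors + 1 / variant.visitors)
  );
  const z = (p2 - p1) / standardError; // |z| > 1.96 is roughly 95% confidence

  return { relativeUplift, z };
}

// A default page converting at 10.0% versus a variant at 10.6%:
// a 6% relative uplift that only just clears the 95% confidence bar,
// the kind of modest, hard-won gain typical of later rounds of testing.
console.log(compare(
  { visitors: 20000, conversions: 2000 },
  { visitors: 20000, conversions: 2120 }
));
```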


It's important to realise that, while you should always look to be testing your pages frequently, previously tested or otherwise, as part of a continued programme of testing, the big headline results of earlier rounds will start to decline test on test. This is a sign of successful testing, indicative that you're starting to get things right from the visitor conversion perspective.

As a very rough guide, I would say the following is true for a successful testing roadmap; let's call it the ARSSS approach (sorry, I'm such a child!):
  1. Analyse your site metrics, establish user journeys, understand what's going on.
  2. Rationalise your site. Remove unnecessary pages and clicks. Remove obvious leakage points in your sales funnel.
  3. Start MVT testing. Use this to get under the skin of the user experience. Do as many rounds of testing as it takes to answer your questions and, hopefully, start to improve your conversion. In essence you're starting to narrow and hone your sales funnel.
  4. Start split testing. Once you know what works on a page, element by element, through MVT, you can use A/B testing to start look-and-feel testing entire pages.
  5. Segment users. Once you've done all of the above, start to get into user segmentation, i.e. start to group your customers into segments based on behaviour (I'll be writing a more in-depth post on this in the future).
Happy testing!