Well, hang it, this blog is all about being a testing 'Maverick', so here goes nothing....
First off, let's not get confused by Iterative Wave Testing as used by Optimost. I think I'm right in saying that's where you test the same variants over a sustained period in 'waves' of testing, to ensure that what you have is validated and statistically significant. All very worthy, good stuff.
What I've been experimenting with is trying a set of test variants in one brief wave of testing, then ditching or culling any negative or lesser-performing variants in favor of entirely new variants in a new wave of testing, which carries forward the positive or successful variants from the last wave. The whole process is repeated for as many waves as it takes to get a robust set of variants that outperform everything else pitted against them. The only qualifying criterion for a variant to be carried forward to the next wave of testing is that it either continues to outperform the original default design or betters the performance of anything that has gone before it, i.e. anything that has been previously removed.
I hope this simple(ish) diagram illustrates how this short wave testing works. Below we have 4 test areas in a web page and 4 phases of testing. As we can see in Test Area 1, Variant A is successful enough never to be culled from the test and ultimately becomes the winner for Test Area 1. Test Area 2 shows an initially unsuccessful Variant A that is culled after the first phase of testing and replaced with a new Variant B, which goes on to be the winning variant for Test Area 2. Test Area 3 has a different story: in the end it takes 4 different variants over 4 phases of testing to find one that performs well enough to be declared a winner. And Test Area 4 arrives at a winner, Variant C, in the third phase of testing.
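The walkthrough above can also be expressed as data. Here's each test area's phase-by-phase surviving variant as a small Python structure; the intermediate variant labels for Test Areas 3 and 4 are my assumption (the diagram only pins down the winners), and the point is simply that the winner is whatever variant holds the slot in the final phase.

```python
# Phase-by-phase surviving variant per test area: a variant that is never
# culled repeats across phases; a culled one is replaced in the next phase.
history = {
    "Test Area 1": ["A", "A", "A", "A"],  # A survives every cull and wins
    "Test Area 2": ["A", "B", "B", "B"],  # A culled after phase 1, B wins
    "Test Area 3": ["A", "B", "C", "D"],  # 4 variants needed to find a winner
    "Test Area 4": ["A", "B", "C", "C"],  # C, found in phase 3, carries through
}

# By construction, each area's winner is the variant in the final phase.
winners = {area: phases[-1] for area, phases in history.items()}
print(winners)
```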
What I'm hoping for now is the counter-argument from my testing peers (drop me a line at firstname.lastname@example.org). I'm aware of the shortcomings of this approach, but I want others to have their say on this kind of testing methodology. Here's my bonfire, feel free to piddle all over it : ) Happy Testing!
UPDATE: One thing worth noting with this testing approach is that, if it goes right, your conversion rate for the test variants should improve with each wave in which you attain, keep or build on positive-performing variants, but at the same time you will see a diminishing uplift from wave to wave. This is because you are continually testing against improved, stronger-performing variants in the test segment. Ultimately, though, you should still see a good uplift against the underlying original default design.
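A quick numeric sketch of that effect, with entirely made-up conversion rates: the wave-over-wave uplift shrinks each time because each wave's comparison point is stronger, while the uplift against the original default keeps accumulating.

```python
# Hypothetical conversion rates (assumed numbers, not real test data).
baseline = 0.050                        # original default design
wave_rates = [0.056, 0.060, 0.062, 0.063]  # best of each successive wave

# Uplift of each wave relative to the wave before it: shrinks every time.
wave_over_wave = [
    (curr - prev) / prev
    for prev, curr in zip([baseline] + wave_rates, wave_rates)
]

# Uplift against the original default: keeps growing, ending at +26%.
vs_baseline = [(rate - baseline) / baseline for rate in wave_rates]

print(wave_over_wave)
print(vs_baseline)
```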