13 Feb 12
23 Nov 12 1:03 am
The last time I used Google for split testing, it was called "Google Website Optimizer". I'm sure it hasn't changed much, and they walk you through it nicely (just like with Google Analytics and their other services).
Basically you add some Google copy/paste script into the header of your "A" and "B" HTML pages, then launch your test. Google then starts gathering statistics and building your report. They warn you if you haven't had enough traffic for the test to reach a 95% confidence level (or whatever level you chose). They show you results along the way - but warn you (correctly) not to base any decisions on early results, before you reach the statistical confidence level you need.
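Google's tool does that confidence math for you, but if you want to sanity-check results yourself, the standard approach is a two-proportion z-test. Here's a minimal Python sketch (the function name and the conversion numbers are my own illustration, not anything from Google's tool):

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing version B against version A.

    conv_a / conv_b are conversion counts; n_a / n_b are visitor counts.
    Returns (z, p_value); p_value < 0.05 means ~95% confidence.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 30/1000 conversions on A vs. 50/1000 on B.
z, p = ab_significance(30, 1000, 50, 1000)
print(f"z={z:.2f}, p={p:.3f}, significant at 95%: {p < 0.05}")
```

This is also why "peeking" at early results is dangerous: with small samples the z-statistic swings around a lot before it settles.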
It does set a cookie on the visitor's computer, so that they always see the same page version when they come back to your page (and you'll also only see one version).
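I don't know exactly how Google implements that cookie internally, but "sticky" assignment generally comes down to bucketing a stable visitor id (the cookie value) deterministically, so no server-side state is needed. A minimal Python sketch with hypothetical names:

```python
import hashlib

def assign_variant(visitor_id, variants=("A", "B")):
    """Deterministically bucket a visitor into a test variant.

    In practice visitor_id would be a random value stored in a
    first-party cookie on the visitor's first page load; hashing it
    guarantees the same visitor always lands on the same variant.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("visitor-123"))  # same id -> same variant, every time
```

The same trick extends to A/B/C tests: just pass more entries in `variants`.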
Things get more complex if you try to do "multi-variate" tests ("A/B/C"), or change too many things on a page (such that you can't tell which change caused the result).
My simple A/B split tests usually require a minimum of about 300+ samples to reach statistical significance. If you are testing against sales, that means 300 SALES - which can represent a lot of traffic (and make testing very slow).
Because that makes it impossible to split test new sites with low traffic, I sometimes use AdWords PPC during split tests, and sometimes use a "proxy" visitor action as an "indicator of buyer intent" - e.g. whether they followed a link from the landing page to a "just-before-the-sale" info page (like a list of my info product chapters) or some such.
Hope this helps...