Overview
Split testing (or A/B testing) is a method of testing a website interface to find out which version of the website generates the most profit.
A website interface is the visual form of a website displayed on the screen of a computer, laptop, tablet, etc. It includes not only the graphic design but also the actions a user performs while interacting with the website.
The concept of split testing
You can create multiple versions of page elements, or even entire website themes, and test them against each other to find out which is the most effective. During the test, different users are shown different versions of the web page.
When a user takes the desired action on a test page, this information is recorded in the test results table.
Roistat's split testing solution
Split testing is usually based on conversion rate. This tactic may lead you to wrong conclusions and poor decisions.
Most services offer split testing based on conversion rate. However, keep in mind that the number of leads is not equal to the number of sales.
Therefore, we offer split testing based on profit. It helps you draw the right conclusions and make the best decisions.
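As a hypothetical illustration of why this matters, assume two variants each receive 1,000 visitors, and that the average profit per closed deal differs between them (all numbers below are invented for the example only):
    Variant A: 1,000 visits × 5% = 50 leads; 50 deals × $40 profit = $2,000
    Variant B: 1,000 visits × 3% = 30 leads; 30 deals × $90 profit = $2,700
A conversion-based test would declare variant A the winner, while profit-based testing shows that variant B actually brings more money.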
Roistat offers 2 split testing methods:
With style tests, you can change only visual effects. Style tests are functionally limited but easy to set up.
Programmable tests offer more functionality, because many more types of changes can be implemented in code. Moreover, programmable tests are more reliable, as they run on the server side.
Use style tests to experiment with simple visual changes. If you wish to test both visual effects and actions that users can complete on the website, run programmable tests.
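As an illustration of what a style test variant can contain, the snippet below changes a heading's font size and a button's colors with plain CSS (the same kind of change as the font-size example discussed in the test results section below). The selectors and values here are hypothetical and would need to match the actual markup of your page:

    /* Variant 1: larger heading font (hypothetical selector) */
    h1.landing-title {
        font-size: 20px;
    }

    /* Variant 1: more prominent call-to-action button (hypothetical class) */
    .order-button {
        background-color: #ff6600;
        color: #ffffff;
    }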
To create a style test, open the Split Testing page and click the Create style test button. The creation page opens.
To cancel creating the test, go to any other page.
1. Type the test name in the Name field:
2. Add at least one test variant in the Variant section.
To do this, type CSS code into the input field on the Variant 1 tab. When you place the cursor in the field, a CSS hint appears.
The original variant corresponds to the Source variant page. You cannot edit or remove it.
To add more test variants, click .
To rename or remove a variant, click the down arrow on its tab and select the corresponding option from the drop-down list.
To restore the tab that you removed, click .
You can enlarge the input field: click on its lower right corner and drag.
3. You can preview web pages with test variants.
To get a preview, click on the name of the variant you want to preview, enter the web page URL in the Preview this variant field and click the Preview button:
4. Specify the web pages on which your test variants should be shown. Scroll down the page and, in the Pages to run test section, click the corresponding button:
When you place the cursor in the field, a hint appears.
5. After setting up the test, you can save it for later use by clicking Create a new test, or save and launch it immediately by clicking Create and run:
To create programmable tests, click Create new programmable test on the Split testing page. You will then be taken to the instruction page:
On opening the Split Testing page, you will see the AB Tests list.
The tests are grouped into two categories according to their status: Tests in progress and Tests in archive.
The number of tests in a category is displayed next to its name.
Each test is listed in a separate table row.
The table shows the following data:
On the Tests in progress tab you can manage your tests. Here you can:
On the Tests in archive tab you can manage your tests. Here you can:
Click the test name to view the associated data:
On opening a test results page, you'll find:
On the Test results tab you can manage your tests. Here you can:
1. launch the test, if it hasn't been started yet, by clicking Action → Run test:
2. edit the test by clicking Action → Edit:
You can create a copy of such a test by clicking Clone on the settings page. A cloned test can be modified as you wish:
A cloned test can be saved as a CSS test only.
3. stop the test by clicking Action → Stop:
4. restore a stopped test by clicking Action → Continue;
5. archive a stopped test by clicking Action → Archive:
The metrics for the test are displayed in the table below.
The test report is not loaded until at least one visit is registered. Until then, you'll see a system message:
The Test results page shows sales statistics. In the report, the sales are grouped by test variant. For example, here several font sizes are being tested: 12px, 14px, and 20px:
Expand sections of the report to view detailed data. The report for each variant is similar to the Analytics report:
To view the deals list, expand all the sections and click the corresponding keyword set:
To view detailed deal information, click the corresponding deal row:
The table contains data for the following metrics:
When you hover over a metric name, a pop-up appears with the metric's definition:
You can manage the report data range:
1. Change the time interval of the report. To do this, click a button above the table: calendar, Today, Yesterday, Week, Month, 3 months:
2. Choose the variants, ad channels, campaigns, keywords, or sales that you wish to track separately. To do this, check the boxes next to the rows you wish to track and look at the Total/Average row. When no boxes are checked, the Total/Average row shows the overall statistics by default:
Data can be sorted by any metric by clicking its name. By default, data is sorted by the number of leads in descending order.
There are two key metrics that measure the probability of one variant outperforming the others:
CBA is the probability based on visit-to-lead conversion. Use it when the sales cycle is long, e.g., when a sale occurs several months after the first visit.
CBA+ is the probability based on profit. It is used in most cases, as it reflects the profitability of the test variants.
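Roistat does not spell out the exact calculation behind CBA and CBA+ here, but as a rough mental model for the conversion-based metric, the probability that variant A outperforms variant B can be sketched with a standard Beta-Bernoulli model (an assumption for illustration, not Roistat's documented formula), where $V$ is the number of visits and $L$ the number of leads for each variant:
$\Pr(p_A > p_B), \quad p_A \sim \mathrm{Beta}(1 + L_A,\ 1 + V_A - L_A), \quad p_B \sim \mathrm{Beta}(1 + L_B,\ 1 + V_B - L_B)$
The higher this probability, the more likely it is that A's advantage reflects a real difference rather than random variation; by the description above, CBA+ applies the same kind of comparison to profit instead of conversion.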
If one variant clearly outperforms the others and the gap is large, the test has reached statistical confidence and the current leader is unlikely to lose. This is a signal to stop the test: you have found the variant that will bring you the most profit.