Overview

Split testing (or A/B testing) is a method of testing website interfaces to find out which version of a website generates the most profit.

A website interface is the visual form of a website displayed on the screen of a computer, laptop, tablet, etc. It includes not only the graphic design but also the actions a user performs while interacting with the website.


The concept of split testing

You can create multiple versions of page elements or even entire website themes and test them against each other to compare their effectiveness. During a test, different users are shown different versions of a web page.

When a user takes a desired action on a test page, this information is recorded in the test results table.


The Roistat approach to split testing

Split testing is usually based on conversion rate. This approach may lead you to wrong conclusions and poor decisions.

Most services offer split testing based on conversion rate. However, keep in mind that the number of leads is not equal to the number of sales.

Therefore, we offer split testing based on profit. It helps you draw the right conclusions and make the best decisions.
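
For example (with purely hypothetical numbers): variant A turns 6% of visits into leads while variant B turns only 4%, but if B's leads close into sales more often or at a higher average check, B ends up bringing more profit. A conversion-based test would declare A the winner, while a profit-based test would correctly pick B.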

How to split test

Step 1. Choose the testing method

Roistat offers two split testing methods:

  • style tests – run within the Roistat interface with the help of CSS;
  • programmable tests – run server-side and based on PHP.

With style tests, you can change only visual effects. Style tests are functionally limited but easy to set up.

Programmable tests offer more functionality because a much wider range of changes can be produced by code. They are also more reliable, as they run on the server.

 

Summary

Use style tests to experiment with simple visual changes.

If you wish to test not only visual effects but also actions that users can complete on the website, run programmable tests.

Step 2. Configure tests

Style tests

To create a style test, open the Split Testing page and click the Create style test button. The creation page opens.

Note

To cancel the creation, just go to any other page.

1. Type the name of the test in the Name field.


2. Add at least one test variant in the Variant section.

To do this, type the CSS code in the input field on the Variant 1 tab. When you place the cursor in the field, a CSS hint appears.
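
For example, a variant's CSS might be as simple as the snippet below. This is only an illustrative sketch: the body selector and the 20px value are assumptions, so use the selectors and properties that match the elements you actually want to change.

/* hypothetical variant: show the page text in a larger font */
body {
    font-size: 20px;
}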

The original variant corresponds to the Source variant page. You cannot edit or remove it.

To add more test variants, click the add variant button.

To change the test name or remove it, click the down arrow and select the corresponding option from the drop-down list.

To restore a tab that you removed, click the restore button.

You can enlarge the input field: click on its lower right corner and drag.


3. You can preview web pages with test variants.

To get a preview, click the name of the variant you want to preview, enter the web page URL in the Preview this variant field, and click the Preview button.

4. Specify the web pages that will show your test variants. Scroll down the page and, in the Pages to run test section, click the corresponding button:

  • Whole website – all pages of the website take part in the test;
  • Whole website excluding pages – specify the pages that you wish to exclude. Each URL starts on a separate line;
  • Selected pages only – specify the pages on your website that you wish to test. Each URL starts on a separate line (see the example below).
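
For example, to test only two specific pages, the Selected pages only field could contain something like the lines below. The URLs are hypothetical, and the hint that appears in the field shows the exact format Roistat expects:

https://example.com/pricing
https://example.com/signup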

 

Tip

When you place the cursor in the field, a hint appears.


5. After setting up the test, you can save it for later use by clicking Create a new test, or save and launch it immediately by clicking Create and run.

Programmable tests

To create a programmable test, click Create new programmable test on the Split Testing page. You will then be taken to the instruction page.

Step 3. Manage the tests

On opening the Split Testing page, you will see the AB Tests list.

The tests are grouped in two categories according to their statuses:

  • "Tests in progress";
  • "Tests in archive".

The number of tests in a category is displayed next to its name.


Each test is listed in a separate table row.

The table shows the following data:

  • Test name – the title of a test;
  • Test type – one of two test types: "CSS" or "Programmable";
  • Start date – the start date and time, accurate to the second;
  • End date – the date and time when the test was stopped or paused, accurate to the second;
  • Duration – the duration of a test (in seconds, minutes, days, etc.);
  • Visit count – the number of visits to the test pages/website.


On the Tests in progress tab you can manage your tests. Here you can:

  • launch a CSS test by clicking Run test;
  • stop a test by clicking Stop;
  • resume a test by clicking Continue;
  • archive a running test by clicking Archive;
  • edit a CSS test by clicking the edit button.


On the Tests in archive tab you can manage your tests. Here you can:

  • restore a test by clicking Recover – the test will appear on the Tests in progress tab after you click that tab's name;
  • edit a CSS test by clicking the edit button.


Step 4. View and evaluate the results

Click the test name to view the associated data.

On opening a test results page, you'll find:

  • the test name, its type and status;
  • Period – the test duration (in seconds, minutes, days, etc.) and the start and end dates;
  • Visits for this test – the total number of visits to the test pages since the test started.


On the Test results tab you can manage your tests. Here you can:

1. Launch the test, if it hasn't been started yet, by clicking Action → Run test;

2. Edit the test by clicking Action → Edit:

    • you can edit only style tests that have not been started yet – the settings page will then open;

    • you cannot edit a running or stopped test.

You can create a copy of such a test by clicking Clone on the settings page. A cloned test can be modified as you wish.

 

Warning

A cloned test can be saved as a CSS test only.

3. Stop the test by clicking Action → Stop;

4. Resume the test, if it was stopped, by clicking Action → Continue;

5. Archive the test, if it was stopped, by clicking Action → Archive.

The metrics for the test are displayed in the table below.

 

Note

The test report is not loaded until at least one visit is registered. Until then, you will see a system message.


The Test results page shows sales statistics. In the report, the sales are grouped according to the test variants. For example, here several font sizes are being tested: 12px, 14px, and 20px.

Expand sections of the report to view detailed data. The report for each variant is similar to the Analytics report.

To view the list of deals, expand all the sections and click the corresponding keyword set.

To view detailed information about a deal, click the corresponding deal row.

The table contains data for the following metrics:

  • Visits;
  • Leads conversion;
  • Leads;
  • Sales conversion;
  • Sales;
  • Revenue;
  • Average revenue;
  • Cost;
  • CPO;
  • ROI;
  • Variant's additional value (in rubles);
  • CBA;
  • CBA+.

As you hover over a metric name, a pop-up appears with the metric's definition.
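
For quick reference, two of these metrics are commonly defined as follows. These are the standard formulas, not necessarily Roistat's exact computation – the pop-up definitions in the report are authoritative:

CPO = Cost / Sales
ROI = (Revenue − Cost) / Cost × 100%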

You can manage how the report data is displayed:

1. Change the time interval for the report. To do this, click one of the buttons above the table: the calendar, Today, Yesterday, Week, Month, or 3 months.

2. Choose the variants, ad channels, campaigns, keywords, or sales that you wish to track separately. To do this, check the boxes next to the rows you wish to track and look at the Total/Average row. When no boxes are checked, the Total/Average row shows the overall statistics by default.

Data can be sorted by any metric by clicking the metric name. By default, the data is sorted by leads in descending order.

Warning

There are two key metrics that measure the probability that one variant outperforms the others:

CBA is the probability based on visit-to-lead conversion. It is used when the sales cycle is long, e.g., when a sale occurs several months after the first visit;

CBA+ is the probability based on profit. It is used in most cases, as it reflects the profitability of the test variants.

If one variant is clearly outperforming the others and the gap is large, the test has reached statistical confidence and the current leader is very unlikely to lose. This is a signal to stop the test: you have found the variant that will bring you the most profit.
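
Roistat does not publish the exact formula on this page, but probabilities of this kind ("chance to beat all") are commonly estimated with a Bayesian model. As a rough sketch of the idea behind CBA, assuming a Beta posterior over each variant's visit-to-lead conversion rate:

p_k ~ Beta(1 + leads_k, 1 + visits_k − leads_k)
CBA_i = P(p_i > p_j for every j ≠ i | observed data)

The probability is usually estimated by sampling from the posteriors. CBA+ applies the same idea to profit per visit instead of conversion. Again, this is only an illustration of the general technique, not Roistat's documented computation.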

