Optimization and Growth Part 5: Scientific Testing Process
Published: May 25, 2017
Author: Jeremy Epperson
This blog is Part 5 of a six-part series where we will walk through our Foundational Principles of Optimization and Growth, aimed at giving you the confidence you need to implement an optimization program across your organization. Each blog summarizes one of the principles; for a much more in-depth look at each, you can download our white paper where we provide additional examples, stories and detail to help get you started.
Part 5: Scientific Testing Process
In this blog, we’ll take you through each phase of the scientific methodology, from creating hypotheses through post-test analysis, and highlight what you need to take into consideration in order to avoid costly mistakes.
A strong hypothesis is based on a data point or observation. The key is to isolate a variable that allows you to prove or disprove your hypothesis with the results of a test. A common mistake is to include too many variables in your test, to the point where even if you get a significant lift it’s impossible to tell what exactly caused it. While winning is important, it is even more important to continue to learn and gain insights to drive future testing.
After gathering data points through research and transforming them into testable hypotheses, it is necessary to create a prioritized roadmap. This guides your testing and helps to focus your efforts on tests that have the highest probability of improving performance.
We use a proprietary framework based on five weighted factors that contribute to the overall quality of your test hypotheses.
- Business Alignment
- Statistical Significance
(A more detailed description of each of these factors can be found in our white paper. 3Q developed a Prioritization Framework, a proprietary process used to rank tests more precisely than simply weighing level of effort (LoE) and impact. Contact us to find out more.)
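To make the idea of a weighted prioritization framework concrete, here is a minimal sketch of how factor ratings might be combined into a single score for ranking a roadmap. The factor names, weights, and ratings below are hypothetical illustrations only; 3Q's actual framework is proprietary and uses its own five factors.

```python
# Hypothetical weighted-factor prioritization sketch.
# Factor names and weights are illustrative, not 3Q's actual framework.
WEIGHTS = {
    "business_alignment": 0.30,
    "statistical_significance": 0.25,
    "level_of_effort": 0.20,   # rated so that lower effort earns a higher rating
    "expected_impact": 0.25,
}

def priority_score(ratings):
    """Combine 1-10 factor ratings into one weighted score."""
    return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

hypotheses = {
    "Shorten checkout form": {
        "business_alignment": 9, "statistical_significance": 7,
        "level_of_effort": 8, "expected_impact": 8,
    },
    "New hero image": {
        "business_alignment": 5, "statistical_significance": 6,
        "level_of_effort": 9, "expected_impact": 4,
    },
}

# Highest-priority test first on the roadmap
roadmap = sorted(hypotheses, key=lambda h: priority_score(hypotheses[h]), reverse=True)
print(roadmap)
```

The point of a scored roadmap is simply that every hypothesis is ranked by the same explicit criteria, rather than by whoever argues loudest.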
The value of optimization comes from the iterative nature of testing. Even when you pinpoint the exact issue blocking conversions on the site, you won’t get lift if your implementation is off. You have to keep iterating to find the optimal solution.
The most commonly cited reason for lack of credibility in marketing organizations is the inability to produce measurable results. Optimization is built on the foundation of accurately measuring the impact of any changes you make to your site or your marketing campaigns. Even when following this rigorous scientific testing process, you will face a number of potentially misleading threats. The biggest threat to your optimization program is data pollution: events, trends, or testing mistakes that skew your results and cause misinterpretations that go undetected. Keep a steady eye out, because it can have a big impact on your results.
Testing & Statistics
One of the major benefits of optimization is that the scientific testing process gives you precise statistical insight into the impact of your changes. Using this process, you can attach a level of statistical confidence to your recommendations before pushing changes live on your site. Without it, results are very easy to misinterpret. Here are two common types of errors:
- Type 1 Statistical Errors: False positives are “validated” results that lead you to mistakenly believe there is a change in conversions. For example, you could think you have a winning test, but in reality there is no improvement in performance. This is a major threat because pushing that variation live on your site could actually drop conversion rates and sales.
- Type 2 Statistical Errors: False negatives fail to reveal a true change in conversions – in other words, a test does not show a lift in conversions when one does in fact exist.
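To show what "statistical confidence" means in practice, here is a minimal sketch of a standard two-sided two-proportion z-test on an A/B result. This is one common textbook method, not necessarily the calculation your testing platform performs, and the visitor and conversion counts are made up for illustration.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test using the normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: control 200/5000 conversions, variation 250/5000
z, p = two_proportion_z(200, 5000, 250, 5000)
if p < 0.05:
    print(f"Significant at 95% confidence (p = {p:.4f})")
else:
    print(f"Not significant; shipping this risks a Type 1 error (p = {p:.4f})")
```

Calling a test before the p-value crosses your threshold invites Type 1 errors; stopping a test before it has enough traffic to detect a real lift invites Type 2 errors.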
In the culture blog, when we discussed the skill sets needed on your team, we listed “Analytics & Statistical Analysis” first. You need someone on your team with a strong background in analytics who can ensure that results are not misinterpreted and who can track and present them properly. This will allow you to avoid mistakes, pursue true wins, and maintain credibility.
Post-test analysis is performed after you call a test. During this phase you’ll look at the testing results at a deeper level. Often, your initial insights or recommendations can be enriched by doing more analysis.
It is always a good idea to have a backup source for your data to confirm results. Most A/B testing and personalization platforms have built-in integrations that you can set up with your analytics platform. This gives you the opportunity to ensure that the data you have collected is accurate across the platforms.
Here are a few ways you can analyze your data:
- Cross-Device: Users can display differences in behavior based on device, even if the test variation is responsive;
- Segmentation: Certain segments of users may convert at varying rates or not at all;
- Geolocation: State or national markets can react differently to your variations;
- Channel: Results may vary based on traffic sources.
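The segment cuts above can be sketched with a simple per-dimension breakdown of conversion rates. The visitor records and field names below are hypothetical; in practice this data would come from your testing or analytics platform.

```python
from collections import defaultdict

# Hypothetical per-visitor test records
records = [
    {"device": "mobile",  "channel": "paid",    "converted": True},
    {"device": "mobile",  "channel": "organic", "converted": False},
    {"device": "desktop", "channel": "paid",    "converted": True},
    {"device": "desktop", "channel": "organic", "converted": True},
    {"device": "mobile",  "channel": "paid",    "converted": False},
]

def conversion_by(records, field):
    """Conversion rate per segment along one dimension (device, channel, ...)."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, visitors]
    for r in records:
        totals[r[field]][0] += r["converted"]
        totals[r[field]][1] += 1
    return {seg: conv / n for seg, (conv, n) in totals.items()}

print(conversion_by(records, "device"))
print(conversion_by(records, "channel"))
```

A variation that looks flat overall can still be a clear winner on mobile and a loser on desktop; breaking results out by segment is how you catch that.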
To view all of the principles, check out our “6 Foundational Principles of Optimization and Growth” white paper where we provide a more in-depth look into proper testing, research methodology, and more.
Even after learning the key principles, the path to optimization will still require trial and error. Want to get moving more quickly? We’d love to help! You can also contact the 3Q CRO team to learn how you can start seeing gains right away.