Edwyn Raine, Head of Digital at Jaywing Australia, offers some advice for getting started with conversion rate optimisation, including what research and resources to draw on, and how to start a testing culture.

A common trait among fast-growing technology companies is their propensity to experiment. This culture of experimentation is ingrained in their existence, born out of start-up garages and a need to hustle, pivot, take risks and move fast to survive. With much of their products and services delivered digitally, experimentation has naturally extended to public-facing websites, marketing initiatives, operational processes and other facets of their businesses. Each test will have a KPI or objective, but all will ultimately aim to improve profitability or customer experience.

According to research by Forrester Research, companies that experiment grow eight times faster than companies that don't. Booking.com, the online travel platform, is a prime example, claiming to run on average 25,000 experiments each year, often with over 1,000 live simultaneously. While experimentation is not the sole driver of its fortunes, Booking.com has seen revenues increase fivefold in the last decade, and its share price is at record highs despite the significant impacts of the pandemic.

But tech behemoths aside, experimentation, and specifically conversion rate optimisation (CRO), is something that almost every business (and website) can benefit from. And best of all, the less you've been doing in the past, the greater the likelihood of finding needle-moving tests to run.

Getting started with experimentation and CRO

Here are three key pillars that will set you up for early success when starting CRO.

1. Be guided by your research

Using research and data not only helps support the argument that something needs to be tested, it also increases the likelihood that the test will prove successful. Here are some of the easiest, yet most effective, ways:

Web analytics – Google Analytics is a gold mine for finding evidence-based opportunities and reasons to test parts of a website. Even with a basic tracking implementation it's possible to see drop-off rates along key user journeys, segmented by device type, channel or landing page. With some quick custom tracking enhancements you can measure any interaction and quantify whether something is worth testing. Moreover, it gives a good sense of the amount of traffic a test could influence, and therefore the overall impact a positive result could have on total conversions or revenue.
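As a rough illustration of quantifying drop-off, the sketch below walks a funnel of step counts (the numbers here are hypothetical, not from any analytics account mentioned in this article) and reports the drop-off at each stage and the overall conversion rate:

```python
# Hypothetical funnel counts pulled from an analytics export:
# (step name, number of sessions reaching that step)
funnel = [
    ("Landing page", 10000),
    ("Product page", 4200),
    ("Add to cart", 1300),
    ("Checkout", 650),
    ("Purchase", 390),
]

# Compare each step with the next to find where users abandon the journey
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off")

# Overall conversion rate: sessions completing the final step vs entering the funnel
overall = funnel[-1][1] / funnel[0][1]
print(f"Overall conversion: {overall:.1%}")
```

The steps with the steepest drop-off are natural candidates for testing, and the raw counts indicate how much traffic a test at that step could influence.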

Behavioural analysis – There are dozens of great tools, some free and some paid, to help you understand website users in more detail. Hotjar, Inspectlet and FullStory all let you watch session recordings of real users engaging with your website. UserTesting is another option: you can set tasks for individuals to complete and get one-on-one feedback on how they find the task and the challenges they encounter.

User research and surveys – Sometimes qualitative data from real customers is more telling than numbers. Again, there are plenty of free tools (for example, Hotjar Surveys or Usability Hub) which require minimal technical capability to set up and run surveys, either on the website or off it. Feedback from real customers can often be a catalyst for testing and support the process.

Competitor research – While imitation is not the best marketing strategy, it can be an effective way to quickly recover ground on competitors and form hypotheses on what could be improved. If you're in retail, look to the largest ecommerce businesses. If you're trying to acquire leads, review how the biggest travel and technology companies do it. Try to understand why they have chosen to do certain things and the implications those choices may have on conversion rate. This forms the basis of your hypothesis, from which you can potentially improve further.

While these types of research may be used at the start of a CRO project, many of them should be ongoing, revisited whenever a new campaign is launched or a new marketing initiative is planned.

2. Assemble a multifunctional CRO team

Having the right team is as important as having good ideas when it comes to CRO. Unlike many other marketing channels, it requires a diverse set of skills and is unlikely to work if tasked to a single person. In a smaller business you may need each person to wear multiple hats, but the best optimisation work is certainly done collaboratively.

CRO testing tools – While not a person, these tools are going to be critical to your CRO initiatives and may be able to fill interim roles in your team. Google Optimize offers great free functionality, while VWO, Webtrends Optimize and Optimizely all offer more comprehensive paid platforms which allow the creation, running and monitoring of tests. It is worth watching demo videos to understand which may be best for your project.
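As an aside on what these platforms do under the hood, a common approach (this sketch is my assumption of a typical technique, not any specific tool's implementation) is to bucket each visitor deterministically, so the same person always sees the same variant across sessions:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")):
    """Deterministically bucket a user into a variant.

    Hashing the experiment name together with the user ID means the same
    user always lands in the same bucket for a given experiment, while
    different experiments split traffic independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The assignment is stable: repeated calls return the same variant
print(assign_variant("user-123", "new-checkout"))
```

Because the hash output is effectively uniform, traffic splits roughly evenly across variants without any shared state between page loads.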

CRO Lead – The project lead should ensure everyone is fulfilling their role and working together. They'll likely have a mixture of the skills found across the group, but in bigger teams will delegate work to the right person for the job.

Data analyst – Responsible for gathering the data and evidence on which testing is planned. Knowing your way around analytics platforms is crucial, as is using data storytelling to share your insights and findings.

Marketers – Marketing is likely to influence tests and add additional variables. Solid communication between marketers and the CRO team helps plan tests and ensure they align with marketing objectives.

Designers and developers – This is often a sticking point for many businesses, whether that is using developers to get CRO tools set up, or designing and developing the test variants entirely. For smaller organisations and projects, it may be possible to get support from freelancers or the managed services teams of testing platforms. This helps to fill gaps in your team and ensures the project can still go ahead even if internal resources are hard to come by.

3. Constant improvement over perfection

Several common factors that limit the performance of a CRO initiative relate not to the testing itself, but to the way the project is run and perceived. While these often aren't the sole reason experimentation proves unsuccessful, they are key factors that can be overcome.

Working at the right speed and capacity – How quickly the team is able to move and act dictates how quickly results can be acquired and wins put on the board. That said, be careful not to end a test before it has reached statistical significance; many times I've seen tests seesaw multiple times before reaching a conclusion. To avoid this, use a statistical significance calculator or the built-in functionality of your testing tools. On the point of speed, I find it is important to have a mixture of test types: some smaller and quicker to run with fewer developer resources, and some bigger campaigns likely to have a more significant impact.
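A significance check like the calculators mentioned above can be sketched with a standard two-proportion z-test (the conversion numbers below are hypothetical, purely for illustration):

```python
from math import erf, sqrt

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: variant B vs control A.

    conv_* are conversion counts, n_* are visitor counts.
    Returns the z statistic and two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: control converts 200/5000, variant 250/5000
z, p = ab_significance(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # commonly called significant if p < 0.05
```

Calling a test only once the p-value clears your chosen threshold helps avoid declaring a winner during the early seesawing the article describes.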

Opinions and assumptions – Many CRO specialists would agree that it is common to have well-considered hypotheses disproved in an experiment. This is something team members need to accept: only a test will tell the real answer. Scoring your test hypotheses can be a good way to think rationally about the potential impact they could have and how difficult they may be to set up. We often find it is best to start campaigns on less prominent pages (e.g. not the homepage), so that we can get some quick wins to build a case for greater testing freedom without every member of the C-suite getting involved.
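One common scoring approach is the ICE framework (impact, confidence, ease); the article doesn't prescribe a specific framework, so this is an assumed example with made-up ideas and scores:

```python
# Hypothetical test ideas scored 1-10 on each ICE dimension:
# impact (potential uplift), confidence (evidence behind the hypothesis),
# ease (how simple the variant is to design and build)
ideas = [
    {"name": "Simplify checkout form", "impact": 8, "confidence": 7, "ease": 4},
    {"name": "New hero banner copy",   "impact": 4, "confidence": 5, "ease": 9},
    {"name": "Add trust badges",       "impact": 6, "confidence": 6, "ease": 8},
]

# ICE score is the simple average of the three dimensions
for idea in ideas:
    idea["ice"] = (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

# Rank the backlog so the team works on the highest-scoring ideas first
ranked = sorted(ideas, key=lambda i: i["ice"], reverse=True)
for idea in ranked:
    print(f"{idea['ice']:.1f}  {idea['name']}")
```

Even a lightweight score like this moves the prioritisation conversation from opinions to a shared, comparable rationale.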

It is okay to fail in tests – Every test has a learning: while not all of them will result in a significant uplift, each tells you something you didn't know previously. In fact, several of the biggest experimental successes I've been a part of came as follow-ups to tests that didn't play out as per the hypothesis. It is important to set stakeholders' expectations accordingly, and ensure they understand that it may take a few weeks or months to get big wins on the board, but that almost every test is useful regardless of the result.

As your organisation's maturity in CRO and experimentation grows, so can the types of tests being run. This could mean experimenting in new channels and departments, testing against more granular segments and exploring personalisation. Do you have any other tips or recommendations for marketers looking to expand their experimentation and testing activity? Leave a comment below.
