
“…if you can’t convince the company that having people who are more skilled is going to be good, then you should probably leave the company.”

This comment from Zaid Al-Qassab, former CMO of Channel 4 and now Global CEO of M&C Saatchi Group, was made during a panel session at Econsultancy’s Marketing Capability Leaders Forum earlier this year. His insight captures the essence of a challenge I’ve seen repeatedly whilst working closely with global marketing and commercial capability leaders.

While many of us agree with Zaid’s sentiment, experience has shown me that convincing the Board of the tangible value of learning is far from a given. It’s a challenge that demands more than just belief – which, on rare occasions with particularly progressive boards, is actually enough – but more often requires quantifiable proof in the form of numbers and metrics.

Measuring the impact of specific investment initiatives is a persistent issue for most organisations, and it is by no means a challenge unique to learning. Isolating drivers of impact is hard whether it’s for a learning programme, a capital investment or a new hire (or anything else that involves an investment of time and money). So let’s not kid ourselves that this is a perfect science for anyone.

But you have to try to do it as well as, if not better than, other business functions.

Yet, depending on which study you read, only 4%-10% of learning programmes measure real business impact, with nearly 60% relying solely on easily obtained (and, on their own, unconvincing) metrics such as completion rates.

This simply isn’t good enough.

A recent survey of over 500 L&D leaders by the Learning Performance Institute identified measuring impact and return on investment (ROI) as one of their top five challenges – with stats like these, that should be no surprise.

We believe (and rightly so, in my opinion) that effective learning programmes are critical for maintaining a competitive edge, especially in dynamic fields such as marketing and ecommerce. However, to be considered a vital business function, Capability/L&D must be able to show tangible results that the wider business can understand on their own terms and therefore get behind. Measuring the impact of learning interventions ensures that investments are ongoing and robust, rather than being the first thing the CFO strikes out of the cost base when times are tough.


Econsultancy focuses on four buckets of measurement with our clients, advocating a balanced scorecard approach across these four key dimensions.

1. Learner Reaction

Understanding learner reaction is the first step in measuring the effects of any training programme. It provides immediate feedback on the training experience and its perceived value.

More Capable and Confident: Post-training surveys can assess whether participants feel more capable and confident in their roles. These surveys should include questions that gauge learners’ self-assessment of their skills before and after the training. Confidence should not be underestimated as a measure, and this article from our sister brand Marketing Week outlines the challenge for marketers.

A lack of confidence in one’s own abilities does not make for a high-performing and innovative individual.

Net Promoter Score (NPS): Asking learners whether they would recommend the training to a colleague is an effective way to measure satisfaction. It has its critics, but I believe NPS is a valuable metric as it reflects the overall enthusiasm for, and perceived value of, the training. A high NPS indicates that the training is well received and considered beneficial by participants.
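For those building this into a survey pipeline, the NPS arithmetic is simple: the percentage of promoters (scores of 9-10 on the standard 0-10 “would you recommend?” scale) minus the percentage of detractors (0-6). A minimal sketch, with invented survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 'would you recommend?' scale."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical post-training responses: 6 promoters, 3 passives, 1 detractor
responses = [10, 9, 9, 8, 8, 7, 10, 6, 9, 10]
print(nps(responses))  # -> 50
```

Note that passives (7-8) dilute the score without counting either way, which is why NPS can fall even when average satisfaction looks healthy.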


2. Skills Acquisition

Evaluating skills acquisition involves measuring the knowledge and skills that learners have gained from the training programme; the results must be compared against the aims set at the start.

Skills Assessment: In my experience working with diverse clients, evaluating progress requires a clear understanding – shared by learners and stakeholders – of where they currently stand and where they need to be. This insight led to the development of Econsultancy’s Digital Skills Indices in Marketing and Ecommerce, which have been instrumental in guiding organisations through meaningful, measurable skill development.

Test Your Knowledge: Implementing quizzes and tests before, during, and after the training can help quantify knowledge gain. These assessments should cover key concepts and practical applications relevant to the training content.

Engagement Data: Monitoring engagement data, such as participation rates, time spent on learning modules, and completion rates, provides insights into how actively learners are engaging with the training material. High engagement levels tend to correlate with better learning outcomes.

Simulations: Practical simulations and role-playing exercises are used to assess how well learners can apply new skills in a controlled environment. These activities not only test knowledge but also help build confidence and competence.

3. Behaviour Change

The goal of any training programme is to drive behaviour change which improves performance (otherwise nothing really changes in the long term). Evaluating behaviour means looking at whether skills have evolved and how learners apply them in the workplace.

360 Feedback: Implementing 360-degree feedback allows for comprehensive evaluation of behaviour change. Feedback from peers, managers, and direct reports provides a holistic view of how the learner’s behaviour has evolved post-training. Clearly, your learning provider cannot conduct this feedback process alone, and organisations often won’t want an external provider involved at all. That doesn’t make the feedback any less important, though, and your learning provider should still have input into the process design.

Output Evaluation: Some skillsets can be examined with pre- and post-training output analysis. Using human evaluators, or AI at scale, the quality of specific skills can be measured. This helps prove impact and informs future development opportunities.

Productivity Analysis: Building on output evaluation, analysing productivity metrics before and after the training can reveal improvements in efficiency and effectiveness. This can include metrics such as time to complete tasks, ongoing quality of work, and output rates.


4. Business Impact

Assessing business impact is, without a doubt, the most critical and challenging level of evaluation.

I’ve worked with organisations to overcome the inherent complexities of linking training outcomes to business performance indicators, a task often complicated by external factors like shifting market conditions. However, with the right approach—rooted in proven and consistent methodologies and a deep understanding of business dynamics—these challenges can be effectively navigated.

Competitor Benchmarking: Comparing your team’s performance against industry benchmarks and competitors can help gauge the relative impact of your training programmes. Improved performance metrics relative to competitors indicate that your training is effective and likely to deliver competition-beating performance.

Alignment to Business KPIs: Ultimately, the success of a training programme is measured by its effects on Key Performance Indicators (KPIs). This involves linking training outcomes to central business objectives such as increased sales, improved customer satisfaction or a higher market share. Developing a measurement protocol to capture these effects is difficult, but not impossible. Time-bound tests using control groups, A/B testing and regression analysis are well-established methods to ascertain impact and account for external factors. Clear alignment with KPIs demonstrates that the training is not just an isolated activity but a strategic lever for business growth.
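To make the control-group idea concrete, here is a minimal sketch: compare the post-training KPI of a trained group against an untrained control, and use a permutation test to check the uplift isn’t just chance. The KPI figures and group sizes are invented for illustration; a real analysis would also control for pre-training baselines and external factors.

```python
import random

random.seed(0)  # fixed seed so the resampling is reproducible

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(trained, control, n_iter=10_000):
    """Return the observed uplift in mean KPI and an estimate of how
    often an uplift at least that large would arise by chance if
    training had no effect (one-sided permutation test)."""
    observed = mean(trained) - mean(control)
    pooled = trained + control
    k = len(trained)
    hits = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        if mean(pooled[:k]) - mean(pooled[k:]) >= observed:
            hits += 1
    return observed, hits / n_iter

# Hypothetical KPI (e.g. monthly conversion rate, %) per team member
trained = [4.1, 3.8, 4.5, 4.9, 4.2, 4.6]
control = [3.6, 3.9, 3.5, 4.0, 3.7, 3.4]
uplift, p_value = permutation_test(trained, control)
```

A small p-value here says the uplift is unlikely to be random noise, which is exactly the kind of evidence a CFO can engage with on their own terms.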

A balanced scorecard approach

Measuring the impact of learning requires a multi-faceted approach. It’s not enough to rely on simple or single metrics; a comprehensive evaluation framework is essential to truly justify investment in L&D.

The balanced scorecard approach, which I’m advocating, isn’t easy to implement and including all the measures outlined above is sometimes unrealistic. But if you start here and work on what is possible then you will end up in a much better and more defensible place than if you don’t.

This approach not only validates the effectiveness of learning programmes but also ensures they remain a strategic (and financial) imperative in an ever-evolving business landscape.

If you’re looking to create a workforce learning programme that encompasses these effective strategies and delivers truly measurable learning outcomes, feel free to reach out to me and let’s discuss how Econsultancy can support your organisation in achieving its goals.

Richard Breeden is Managing Director of Econsultancy and a Fellow of the Learning Performance Institute (LPI).