I've been asking Maciej about the company's use of advanced analytics, how he feels about 'big data', and how O2 uses analytics for customer retention...
Can you give us a taste of your presentation at Crunch?
I will talk about some common but not so obvious mistakes when planning and executing advanced analytics projects.
I will also show some examples of how we use data to uncover more insights and build profiles behind often quite anonymous subscribers.
For the uninitiated, can you explain what advanced analytics is? How is it used at O2?
At a high level, you can think of standard analytics and reporting as a task where you define the requirements and, no matter who performs the analysis, you will always get the same answer.
It could be a trend in sales or a report of churn levels in a segment or breakdown of margin by region. Most often it’s delivered by linking the data, filtering and building a pivot table.
Advanced Analytics (aka Data Science) involves questions that different Data Scientists would approach and answer differently. Who is most likely to leave? What is the best way to segment our customers based on their engagement level? Why are our customers leaving and how can we win them back?
Much of Advanced Analytics is associated with predictive modelling, but not all of it. It involves creative thinking in ways that go beyond reports and pivot tables, always trying to get to the bottom of customer behaviours.
How do you feel about ‘big data’? A useful term or overhyped?
It’s complete déjà vu – if you look at discussions from 10 or 15 years ago about potential uses of 'data mining' in business, we are having exactly the same hype about 'big data' now, with the same catalogue of benefits and threats.
Things do look different for specific applications like the huge data streams coming from telescopes, particle physics experiments, etc. But if we are talking about standard businesses like telcos, banks or insurers, there is really very little that's new.
It makes sense to bring more data into the analysis as long as the marginal benefit exceeds the marginal cost. That’s what offline PoCs are for: to test and learn what may yield an extra benefit.
From what we can observe, if we have nicely designed aggregates – call it 'small data' if you like – then drilling deeper into data that covers the same topic yields negligible benefits.
It’s the other, previously unseen dimensions of customer behaviour – the ones that can shed new light on a customer’s needs and wants – that are worth exploring.
Sometimes it may involve reaching to new and very detailed (hence big) data sources – like internet usage on smartphones or geographical movements. Another time it’s a simple idea of checking where the first 10 calls are made after activation. As usual, it’s skills that matter, not the size.
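Ideas like "the first 10 calls after activation" boil down to a simple per-subscriber aggregation. A minimal sketch of that idea, assuming an invented in-memory record layout (O2 would do this on Teradata, and all the data here is made up):

```python
from collections import defaultdict

# Hypothetical call records: (subscriber_id, seconds_since_activation, destination).
# Illustrative data only - not real subscriber data.
calls = [
    ("A", 30, "447700900001"),
    ("A", 95, "447700900002"),
    ("A", 120, "447700900001"),
    ("B", 10, "447700900003"),
    ("B", 45, "447700900003"),
]

def first_n_destinations(records, n=10):
    """Return the destinations of each subscriber's first n calls after activation."""
    by_sub = defaultdict(list)
    for sub, t, dest in records:
        by_sub[sub].append((t, dest))
    return {
        sub: [dest for _, dest in sorted(timed)[:n]]  # sort by time, keep first n
        for sub, timed in by_sub.items()
    }

print(first_n_destinations(calls, n=2))
# {'A': ['447700900001', '447700900002'], 'B': ['447700900003', '447700900003']}
```

The point is that the feature is cheap to compute once you know to look for it – the skill is in the idea, not the data volume.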
I imagine O2 has an awful lot of data. How do you decide which is useful to analyse?
A combination of experience, following hunches and experimenting in the lab on smaller samples before we roll out bigger pieces of analysis.
Can you give us an idea of your team and where it sits within the business?
The Advanced Analytics team sits in the Marketing and Innovation directorate and is separate from the directorates responsible for building price plans, offers and running targeted campaigns. This allows us to independently evaluate campaigns and tell the business what is working and what is not.
Being free from the pressure of proving that a campaign is working is a mission-critical differentiator in what we do. We own the Test & Learn and Control Group policy which governs the campaign evaluation process.
Each campaign needs to go through a Test & Learn phase with sufficiently large control groups before it’s released to ‘business as usual’ with smaller control groups.
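The arithmetic behind a control-group readout is straightforward. A minimal sketch of how a Test & Learn result might be summarised – the figures and the function are invented for illustration, not O2's actual methodology:

```python
import math

# Hypothetical Test & Learn readout: churn counts in the targeted group
# versus a held-out control group. All figures are illustrative.
def uplift(target_events, target_size, control_events, control_size):
    """Difference in event rate between target and control, with the
    standard error of that difference (two-proportion approximation)."""
    p_t = target_events / target_size
    p_c = control_events / control_size
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / target_size
                   + p_c * (1 - p_c) / control_size)
    return diff, se

diff, se = uplift(target_events=180, target_size=10_000,
                  control_events=230, control_size=10_000)
print(f"churn rate change: {diff:+.2%} (+/- {1.96 * se:.2%} at 95%)")
```

A sufficiently large control group is what keeps the confidence interval narrow enough to tell a real effect from noise – which is why the policy insists on bigger control groups during the Test & Learn phase.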
How are you using analytics for customer retention?
We have quite sophisticated churn models that give us predictions for two, four and eight weeks ahead for each subscriber.
We have also spent a lot of effort trying to understand why the subscribers are leaving and when exactly they make the decision about leaving us – in many cases it’s months before their last activity on the network.
We are also building an understanding of which churn is actually preventable, because it seems that for a large segment of leavers there is absolutely nothing we can do to stop them. Knowing who is who at the subscriber level is the tricky bit.
We have more research streams like that but I would not want to elaborate too much as we consider ourselves as being quite ahead of other telcos with this one.
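Scoring the same subscriber at several horizons starts with building one label per horizon. A minimal sketch of that labelling step, assuming an invented activity log (the real models run on far richer data):

```python
from datetime import date, timedelta

# Hypothetical activity log: all activity dates per subscriber.
# Illustrative data only.
activity = {
    "A": [date(2013, 9, 1), date(2013, 9, 20)],
    "B": [date(2013, 9, 1)],
}

def churn_labels(activity, score_date, horizons_weeks=(2, 4, 8)):
    """One binary label per horizon: True if the subscriber shows no
    activity in the h weeks following the scoring date."""
    labels = {}
    for sub, dates in activity.items():
        labels[sub] = {
            h: not any(score_date <= d < score_date + timedelta(weeks=h)
                       for d in dates)
            for h in horizons_weeks
        }
    return labels

print(churn_labels(activity, score_date=date(2013, 9, 10)))
# {'A': {2: False, 4: False, 8: False}, 'B': {2: True, 4: True, 8: True}}
```

Each horizon then gets its own predictive model trained against its own label, which is what lets the business act early on the eight-week signal while the two-week signal stays precise.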
What are the most useful tools for your job?
Overall it’s quite a typical technology stack: Teradata for heavy data prep, R for experiments, MS Excel for visualisations, SAS Enterprise Miner and IBM Modeller for building predictive and segmentation models.
What’s the key success factor for exploiting analytics?
Having a proper think before even kicking off the computer.
If there is a business question to answer, you need to understand it fully and play the ‘why’ game with the requestor. Getting to the source of 'why do you need this' often redirects the analysis and you may end up with a completely different approach, delivering something really useful instead of just interesting.
The next step is to reflect and try to answer the question using common sense. If the question is about churn, you need to break down what churn really is: it is not just porting out; there is also inactivity, involuntary terminations, etc.
These may be broken down further still. Each churn type may have different drivers – and hence deserves a separate model.
Next (the computer is still safely off) you take one target type after another and try to brainstorm: why do people do that? What drives them, when do they make the decision, when do they even start thinking about it, and can we actually see any traces of that decision-making process in the data?
Once we exhaust our ‘common sense’ avenues, we can start looking at some data in a much more defined way, knowing much better what we are looking for.
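Once the brainstormed drivers meet the data, a first sanity check is simply how often each candidate driver shows up within each churn type. A minimal sketch of that check – the sample, the churn types and the driver flag are all invented for illustration:

```python
from collections import Counter

# Hypothetical labelled sample: each leaver tagged with a churn type and a
# brainstormed candidate driver flag. Illustrative data only.
sample = [
    {"type": "port_out", "price_sensitive": True},
    {"type": "port_out", "price_sensitive": True},
    {"type": "port_out", "price_sensitive": False},
    {"type": "inactivity", "price_sensitive": False},
    {"type": "involuntary", "price_sensitive": False},
]

def driver_rate_by_type(rows, driver):
    """Share of each churn type exhibiting a candidate driver - a quick
    common-sense check before any serious modelling."""
    totals, hits = Counter(), Counter()
    for row in rows:
        totals[row["type"]] += 1
        hits[row["type"]] += row[driver]  # True counts as 1
    return {t: hits[t] / totals[t] for t in totals}

print(driver_rate_by_type(sample, "price_sensitive"))
```

If a driver only lights up for one churn type, that is evidence the types really do deserve separate models rather than one catch-all churn model.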
What are the major challenges in your role?
Building up the team was certainly one of them. The team was created only a year and a half ago and we started from scratch – no infrastructure, everyone new to O2. Now it’s eight highly skilled people, jumping (more or less) smoothly from one analysis to another.
Constant pressure from multiple stakeholders to deliver ad-hoc reports instead of real analytics was another one – but this is changing.
Implementing the Control Group policy was not an easy task either.
What things should businesses be doing with their analytics efforts that most don’t do today?
The classic scenario for introducing Advanced Analytics in companies goes like this: a company creates the function and hires a Data Scientist or consultants.
There is no strong analytical leadership, so the quality of analytical deliverables varies. Due to fragmented organisational structures, the analytical outputs (models, insights) are widely ignored.
Without control groups, the effect of targeted campaigns can’t be measured. The Data Scientists get frustrated – they really hoped to change how the business operates – and they leave.
The two game changers I would stress are: provide strong analytical leadership and support proper control groups.
Econsultancy's Crunch - Data, Analytics and the Rise of the Marketing Geek, takes place on October 10 at Truman Brewery, London. Crunch is the event for the analysts, strategists and boffins who turn raw numbers into insight, then revenue. This event is one of five that make up our Festival of Marketing.