The rise of the machines

Machine learning is used to predict how people will react, which is, at its core, what every marketer wants to understand.

Applications include:

  • personalising advertising
  • informing stock levels
  • providing customer service (fairly nascent)
  • conversion optimisation (copy and web design)
  • recommendations
  • lead generation (from unstructured text analysis)
  • image recognition
  • search (natural language processing)
  • fraud detection
  • sentiment analysis

In some areas, such as search and advertising, machine learning has been working in the background for a while and is an implicit part of how they function. In other areas, though, the rise of the machines presents a cultural issue.

In lead generation, for example, it’s understandable that those with 30 years’ experience of an industry are sceptical when told that an algorithm will be better at finding the right accounts to target.

I recently spoke with Aman Naimat, SVP Technology at Demandbase, the company that has developed Demandgraph, an AI solution for account targeting. As impressive as the technology is, Aman confirmed that cultural issues are probably the most pressing challenge when it comes to adoption (over integration, for example).

What do we know about human-machine trust?

The notion of human-machine trust has probably never been as pertinent as it is today, with semi-autonomous cars already on the market and self-driving cars well on the way to realisation.

Would you want a self-driving car to sacrifice you, the occupant, in order to save the lives of multiple pedestrians when an accident is inevitable? Most people agree on the ethical answer to this question (self-sacrifice), but wouldn’t want to drive such a car.

Of course, if people refused to buy such autonomous cars, more traffic deaths would occur: a Catch-22 situation.

Away from such gory matters, how do people feel about machines helping them take decisions in nuanced, work-based scenarios?

At the Singapore University of Technology and Design, Jessie Yang and Katja Hölttä-Otto designed an experiment. Human participants took part in a memory and recognition task using an automated decision aid.

The task involved memorising images, which participants later had to pick out from a pool of similar images. The automated decision aid provided recommendations but, crucially, was designed to do so reliably for some participants and unreliably for others.

As detailed by MIT News, the results revealed that the unreliable automated aids were overtrusted. Conversely, the highly reliable automated aids were undertrusted.

On reflection, this seems somewhat like human nature. We may be keen to take advantage of AI but perhaps ultimately we don’t fully trust it.

Another experiment, this time at MIT by Yang and Julie Shah, Assistant Professor in the Department of Aeronautics and Astronautics, went a step further, looking at how interface design affects so-called ‘trust-reliability calibration’.

The pair were interested in alarm displays in high-risk industries. Rather than the traditional “threat” or “no threat” alarm, which is often set with a very low threshold (for obvious reasons), likelihood alarm displays (how likely is the risk event?) could help to mitigate the “cry wolf” effect.

Over time, these assessments of likelihood may ensure that trust in the warning system remains higher.
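
To make the idea concrete, here is a minimal sketch, my own illustration rather than code from the MIT work, contrasting a traditional low-threshold binary alarm with a likelihood alarm display. The threshold values and band labels are invented for the example.

```python
def binary_alarm(risk_probability, threshold=0.05):
    # Traditional display: anything above a deliberately low threshold fires,
    # which is safe but invites the "cry wolf" effect.
    return "THREAT" if risk_probability >= threshold else "NO THREAT"


def likelihood_alarm(risk_probability):
    # Likelihood display: communicate how likely the risk event is,
    # so operators can calibrate their trust rather than tune alarms out.
    if risk_probability >= 0.7:
        return "HIGH likelihood of threat"
    if risk_probability >= 0.3:
        return "MEDIUM likelihood of threat"
    if risk_probability >= 0.05:
        return "LOW likelihood of threat"
    return "no threat indicated"


for p in (0.02, 0.06, 0.40, 0.85):
    print(f"p={p:.2f}  binary: {binary_alarm(p):<9}  likelihood: {likelihood_alarm(p)}")
```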

Okay, this may seem like a far cry from marketing software, but the principles carry across industries. The more educated the end user, the better the relationship with intelligent technology.

Complexity = vulnerability

Research by the British Science Association revealed that ‘half of those surveyed would not trust robots in roles including surgical procedures (53 per cent), driving public buses (49 per cent) or flying commercial aircraft (62 per cent).’

It’s arguably only education that can allay these fears.

However, as Kalev Leetaru points out, writing for Forbes, even with an increased understanding of how machine learning works, the complexity of web-based services can still scupper trust.

Kalev describes systems ‘built on top of a layer of trust of other systems such that an error, vulnerability, or mistaken understanding at any level can cascade across the system’. For example:

  • in 2013, Microsoft’s Azure service suffered a worldwide outage due to a simple expired SSL certificate
  • in 2012, a leap day caused an outage when one Microsoft cloud system misunderstood what another was doing
  • in 2013, a single developer at Amazon was able to impact an entire data centre at the height of the Christmas shopping season
  • at the end of 2015, Google experienced an outage when a new network link in Europe was connected manually, overriding automated safety checks

Education helps us see that, however sophisticated machine learning becomes, it still relies on other infrastructure and on the quality of its data. Ultimately, it is still limited by humans, and we are still refining our trust-reliability calibration.

This can be observed anecdotally. Look at the tweet below, something we’re all familiar with. Retargeting and recommendations are incredibly powerful when implemented correctly, but the rules don’t always stack up.

So, how does this impact culture and strategy?

Enough of my secondary research into human-machine relationships. Aside from educating their employees, what do companies need to bear in mind when considering AI strategy?

The main thing is data quality and scope. Supervised AI is only as good as its inputs, and all marketers should be aware of these inputs when relying on machine learning, just as they are when relying on statistical analysis.
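
As a rough illustration of that point, the sketch below (assuming scikit-learn is installed; the dataset and noise levels are arbitrary) trains the same model on progressively noisier labels and scores it against clean held-out data. Accuracy falls as the inputs degrade, however capable the algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A synthetic classification task standing in for any marketing prediction problem.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.3):
    # Flip a fraction of the training labels to simulate poor data quality.
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]

    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"label noise {noise:.0%}: held-out accuracy {acc:.3f}")
```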

Richard Sargeant, director of ASI Data Science, a company that helps governments harness AI, has recently written about the way that siloed departments can hinder the effectiveness of AI.

Here’s the important bit about data scope:

“Government is usually organised by service [education, health etc]. …But this is not sensible in an AI age: if an agency is good at running one kind of digital service, chances are they’ll be good at running a bunch of them.

“…The most important factor in determining whether [a department] succeeded [in digital] wasn’t their knowledge of their departmental subject matter, but whether they had the organisational leadership and culture to develop and run digital services.

“And the departmental silos continue to make it very hard for datasets to work together.

“Why don’t we check benefit records against the death register to avoid paying benefits to people who are dead? Because they are run by separate departments.

“Why don’t we have one consistent list of companies in the UK? Because HMRC and Companies House maintain their own separate lists.

“The quality of machine learning and AI is heavily dependent on the quality and volume of data.”

Education, data quality and volume, transparency within the organisation – all are vitally important.

We still have some way to go

Many of today’s machine-learning powered solutions are ‘human in the loop’ solutions. That means they rely on humans to validate some of their findings and to provide feedback into the system.

Humans in the loop can move AI from 80% accuracy to 90%+. And, of course, algorithms are limited by the humans that set them a-whirring, and the data they are using.
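
To give a feel for what ‘human in the loop’ looks like in practice, here is a minimal sketch of one common pattern: predictions below a confidence threshold are routed to a person, and the corrections are collected as training data for the next iteration. The model interface (predict_with_confidence) and the threshold are assumptions for illustration, not any particular vendor’s API.

```python
def triage(model, items, threshold=0.9):
    # Accept confident predictions automatically; queue the rest for a person.
    auto_accepted, needs_review = [], []
    for item in items:
        label, confidence = model.predict_with_confidence(item)  # hypothetical interface
        if confidence >= threshold:
            auto_accepted.append((item, label))
        else:
            needs_review.append(item)
    return auto_accepted, needs_review


def collect_feedback(needs_review, human_label):
    # Human-provided labels become new training data for the next model run.
    return [(item, human_label(item)) for item in needs_review]
```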

That means the role of humans has not diminished; rather, it has increased in importance. We have to understand and govern this stuff.

In the words of Mark Zuckerberg, ‘we know how to show a computer many examples of something so it can recognize it accurately, but we still do not know how to take an idea from one domain and apply it to something completely different.’

So, for the next decade at least, marketers and sales people should look upon AI as the incredibly powerful tool that it is.
