Image: hands at a keyboard with a padlock symbol (Shutterstock)

A joint report from Salesforce and the Retail AI Council that surveyed close to 1,400 retailers has found widespread investment in generative AI – but a lack of structure and guardrails that would allow retailers to use it effectively and above all, ethically.

The research, which surveyed retail decision-makers across eight countries including the United States, Canada, Australia, and the UK, found that 81% of retailers have a dedicated AI budget, of which an average of 50% is earmarked for generative AI.

With that said, fewer than a third (30%) of respondents to the survey say they are currently executing on generative AI (as reported by Diginomica). But an additional fifth are reportedly realising the benefits of generative AI and gauging further investment. Nineteen percent of retailers surveyed are “strategising” and working on business cases, while 13% are “exploring” or “considering” the use of generative AI.

Salesforce and the Retail AI Council’s findings suggest that retailers will need to do more to put effective guardrails in place as they implement generative AI within their businesses. Fewer than two thirds (62%) of retailers surveyed said that they have guidelines to address transparency, data security, and privacy when it comes to the ethical use of generative AI in technology, as well as commitments around trustworthy and unbiased outputs.

This is despite retailers also citing bias as the top risk when using generative AI, with half of respondents noting this as a concern. Other major concerns with generative AI included hallucinations (cited by 38%) and toxicity (cited by 35%).

Retailers are right to have concerns about the potential downsides of generative AI – and therefore must ensure that they implement it responsibly, with the appropriate protections put in place to guard against bias. In that vein, another concerning finding of the report is that 51% of respondents say their generative AI output is fully automated without any human intervention, while 41% say there is “light human intervention”.

Only 8% said that their generative AI output is subject to full human intervention.

The overwhelming takeaway from Econsultancy’s generative AI trends for 2024 was that human involvement is “absolutely fundamental” – and yet in practice, generative AI may incorporate less human input than it should, at least in retail.

Retailers should walk before they run with data & generative AI

“The AI revolution is about data, trust, and customer experience,” according to Rob Garf, GM of Retail and Consumer Goods at Salesforce. However, the Salesforce and Retail AI Council report paints a picture of retailers who are struggling with data – which could impact consumer trust, particularly if it leads to a flawed implementation of generative AI.

The report found that just 17% of respondents have a complete, single view of their customers and can harness their customer data effectively – with nearly half (49%) still in the preliminary stages of building, or even considering, the creation of a single customer view.

And while 67% of respondents say they can fully capture customer data, fewer than two-fifths (39%) are fully able to clean the resulting data, with a similar percentage (42%) able to harmonise it.

As Salesforce put it, “The inability to unify and harmonize data means that a retailer’s generative AI model could deliver ineffective or inaccurate results, or responses laced with toxicity and biases.” This only further underscores the need for guardrails against AI bias, and for guidelines around transparency and data privacy, mentioned above.

Two-fifths of retailers (40%) struggle with using their data to make decisions, with close to half (47%) struggling to make it accessible – again indicating problems with applying data at a more basic level before generative AI can even be considered.

Research from Adobe’s newly published 2024 Digital Trends Report, produced in collaboration with Econsultancy, found that consumers were more concerned about data governance than anything else, with 63% ranking “assurance that my personal data is being used responsibly and securely by the brand” as “critically important” to meeting their CX expectations – far ahead of other concerns such as efficient customer support (36%) and seamless interactions across online channels (34%).

And 67% said they “would be more open to granting permission to use [their] data if brands were more transparent about how they were using and securing it”, while 65% agreed that they were “worried about how much data brands hold about [them]”.

Analysing the report findings, Econsultancy Editor Ben Davis noted that “While it may not be entirely surprising to see most consumers rank data security as critical to CX expectations … it is more pertinent than ever.

“Consumers want to know their data is being looked after, especially in the hands of generative AI.”

The findings from Salesforce and the Retail AI Council suggest that retailers need to learn to ‘walk’ (implement data fundamentals) before they can ‘run’ (use data to underpin generative AI) – not only to assure customers that they can trust the retailer’s use of generative AI, but to reassure them that their data is safe and proportionately used in the retailer’s hands.

Econsultancy offers training in AI, data and analytics, and runs marketing and ecommerce academies for global brands.