As human beings, we use our voices for conversation. When we interact with voice interfaces, therefore, our natural instinct is to apply the same rules that we would to a human conversation.
We expect to be understood, but more than this, we expect the entity we’re conversing with to remember the history of our conversation and understand the context of any following remarks.
For some time, major search providers like Google and Microsoft have worked to teach their search engines to understand queries in natural language. Natural language search queries are queries that sound natural spoken aloud, such as, “How high is the Empire State Building?”
They often begin with question words (“When…?” “How…?” “Why…?”), contain stop words (“a”, “the”, “of”, “for”), and are frequently phrased as full sentences.
The default type of search query is a keyword search, which searches for a string of unconnected (and usually ungrammatical) terms, like “empire state building height”. We’ve been conditioned into using these kinds of queries to interact with search engines, as this is the language they understand best.
However, with the advent of voice interfaces and more sophisticated AI, we’re now teaching the machines to understand our language. Over the past few years, search engines have made great advances in their ability to correctly understand natural language queries, and even respond to follow-up queries which depend on the previous searches for context. This is known as conversational search.
In this article – part three of our ongoing series on the state of voice search in 2018 – I’ll take a look at past and present of natural language search, consider how well we can converse with search engines, and give some tips on how to optimise for these types of search queries.
A brief history of natural language search
Natural language in the ‘90s: Ask Jeeves and START
Though natural language search has only recently come into its own, natural language search engines have been around almost as long as web search has.
Remember Ask Jeeves? The 1996 search engine encouraged its users to phrase their search queries in the form of a question, to be “answered” by a virtual suited butler. Ask Jeeves was actually ahead of its time in this regard – other search engines (like Google) were having greater success with keyword-based search, and in 2010, Ask Jeeves finally bowed to the pressure from its competition and outsourced its search technology instead of continuing to develop it internally.
Ironically, had Ask Jeeves been founded about fifteen years later, it most likely would have been at the cutting edge of natural language search, ahead of the very search engines that squeezed it out.
Ask Jeeves in its heyday
And Ask Jeeves isn’t the only natural language search tool from the 1990s. The START Natural Language Question Answering System, developed by MIT’s Artificial Intelligence Lab, has been online since 1993, and proudly bears the title of “the world’s first Web-based question answering system”.
While not a web search engine per se (it functions more like an online encyclopaedia, drawing answers to questions from its internal database), START is a fascinating look back at what was already being accomplished with natural language search more than two decades ago.
According to START’s website, the question answering system “parses incoming questions, matches the queries created from the parse trees against its knowledge base and presents the appropriate information segments to the user. In this way, START provides untrained users with speedy access to knowledge that in many cases would take an expert some time to find.”
While this is hardly the case any more in 2018 (long live Wikipedia and Google’s Knowledge Graph), like Ask Jeeves, START was certainly a tool ahead of its time.
The big players join the game
For the more mainstream search engines, natural language didn’t really appear on the radar until around 2011. It may surprise you to learn that Bing was years ahead of Google in this regard: in 2011, it incorporated “natural language search” into Bing Shopping, teaching its search engine to interpret multi-part queries such as “cashmere sweaters under $100”.
While this might seem a long way from true natural language, it was a significant step at the time. One of the difficult things about natural language search queries is that they can be highly specific and contain multiple components, which the search engine needs to be able both to parse individually and to understand the relationship between.
A question such as “Where can I buy red shoes for under £50?” seems simple to us, but would have confounded a search engine less than ten years ago.
Not only was Bing programming its search engine to understand natural language in 2011, its blog post announcing the development also encouraged searchers to use voice search:
At the mall and wondering if you’re seeing a bargain? Just fire up the Bing for Mobile app on your phone and say “sony digital camera under $120”. Voila, it’s that easy.
Bing was so successful in cementing its reputation for natural language search that a comparative study of “three natural language search engines and Google” carried out in 2013 listed Bing as one of the three natural language search engines, alongside Ask.com and Hakia, an early semantic search engine.
It would take Google a couple more years to overtake Bing in this regard, announcing in 2015 that “The Google app now understands you a little better” and that “complex questions” were welcome. Google’s search engine was now capable of understanding superlatives (such as “What are the largest cities in Texas?”) and queries with multiple variables (such as “What was the population of Singapore in 1965?” or “What was the U.S. population when Bernie Sanders was born?”).
What about natural language search outside of our English-language, western bubble? Baidu is China’s largest search engine, and well-known as a leader in both natural language processing and artificial intelligence (which is integral to parsing complex natural language queries).
While it’s difficult to pin down any articles or studies which deal explicitly with how well Baidu responds to natural language queries, a 2017 post to Baidu’s Developer blog [Chinese-language source] states that natural language processing “has been an important part of search technology since the birth of Baidu”.
Anecdotal reports, such as this question on Quora, suggest that Baidu is the most effective search engine at parsing natural language queries in Chinese, but it still may not rival Google’s instant accuracy in English.
The era of conversation
Alongside these developments in natural language search, search engines were getting better at understanding context.
2013 was the red-letter year for Google in this regard: this was the year that Hummingbird, Google’s semantic search algorithm update, took flight. At the time, Google was coy about what the algorithm change involved, but revealed that it affected “around 90% of searches” and allowed Google to better understand concepts, relationships between concepts, and more complex questions.
In the same year, at Google’s I/O developer conference, Google previewed what conversational voice search would look like in Chrome on desktop. Desktop might not seem like the most natural home for conversational search – or any voice search for that matter – but it might have been easier for Google to deploy the feature on desktop first.
Google’s new functionality gave users the ability to ask things like, “Will it be sunny in Santa Cruz this weekend?” and then follow up by asking, “How far is it from here?” without needing to specify what “it” was. The week after I/O, Google quietly pushed the feature live for all Chrome users on desktop, as spotted by Danny Sullivan of Search Engine Land.
Sullivan demonstrated that Google’s conversational search could retain an implied context through three successive follow-up queries: from asking “How old is Barack Obama?” to asking about his height, his wife’s name and then her age, all without needing to re-specify the subject of the query.
Image: Danny Sullivan/Search Engine Land
Bing wasn’t far behind its rival, announcing conversational search on Bing desktop in August 2014, which showed off Bing’s ability to interpret and answer full-sentence search queries, and like Google, retain context throughout several successive searches.
Interestingly, voice search doesn’t seem to have been required for this functionality at the time. Now, however, if you type follow-up queries into Bing desktop search, it will treat them as separate, unconnected searches – as will Google.
How good is conversational search really? Google vs Bing
Fast-forward to 2018, and Bing and Google are each vying with the other to show off their voice search prowess. Google is touting the ability of the Assistant to cater to your every whim, while Microsoft boasts of the number of voice searches carried out via Cortana on Windows 10.
Both of them have had live conversational search for a few years now, so I decided to put each search engine through its paces to see how they dealt with natural language voice queries and conversational search.
First up was Google, which gamely answered my question, “Who is the UK Prime Minister?” with “Theresa May is the Prime Minister of United Kingdom”.
Follow up with “How old is she?” and the Assistant chirps back, “She’s 61 years old.”
Trying to throw Google a curveball, I then asked, “What about Germany?”
The resulting search went through as “Who is the Germany Prime Minister?” which, while not perfect wording, still returns the correct answer.
It took another subsequent search for Google to forget the context of our conversation – when I asked, “How long has she been in power?” Google searched this exactly, returning results about the length of Queen Elizabeth II’s reign.
Still, not half bad, and I probably wouldn’t ask so many follow-ups in a normal situation; Google performs well enough for the kind of conversational searches you might naturally ask, and only falls down when you really try to push it.
Next, I tried out the same search using voice on Bing’s mobile app. Bing performed well enough for the initial search, although it didn’t read the result back to me, but presented it silently. That might be due to an inability to interface with the Google Assistant on Android – I’m not sure what the politics are there.
My, that’s a flattering photo you’ve got there.
As an aside, I’d like to register how much I hate the dialogue box about cookies that pops up at the top of every Bing search window. There’s no option to get rid of it at all – you either have to press “Learn More” or let it sit there. Bing is rapidly losing points on UX, and we haven’t even got to our follow-up search yet.
Unfortunately, Bing didn’t redeem itself from there. After I asked “How old is she?” it searched this phrase exactly, with no awareness of the context of my query.
So, no points for Bing on conversational search. But I decided to test out the two search engines on another query – something more practical that I might ask while out and about with my phone.
“How do I get to the nearest Sainsbury’s?” I asked Google.
“Here are your directions,” the Assistant helpfully responded, pulling up a map of the local area. Very useful – except that its map assumed that I was driving, when I don’t in fact have a car (and probably wouldn’t be using it to drive through central London if I did).
“Can I walk there?” I asked.
Correctly interpreting my meaning, Google searched, “Can I walk to the nearest Sainsbury’s” and presented me with walking directions this time.
It’s relevant to note that when making conversational voice queries, Google inserts the subject for you, so that even if your actual query didn’t restate the subject of the search, it will go through to Google’s search engine as a full query, subject and all.
This should reassure anyone who is worrying about their ability to optimise for fragments like, “Can I buy it tomorrow?”
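As a rough mental model of what that rewriting looks like, here is a toy Python sketch. To be clear, this is purely illustrative: the substitution rules and the `rewrite_followup` function are my own assumptions, not Google’s actual query-rewriting pipeline, which is far more sophisticated.

```python
# Toy model of conversational query rewriting: replace a referring word
# in the follow-up query with the subject remembered from the previous
# query. The substitution rules here are illustrative assumptions only.
import re

def rewrite_followup(followup, subject):
    """Swap simple referring words for the remembered subject."""
    # "there" refers to a place, so it becomes "to <subject>"
    rewritten = re.sub(r"\bthere\b", f"to {subject}", followup,
                       flags=re.IGNORECASE)
    # pronouns are swapped for the subject directly
    rewritten = re.sub(r"\b(it|she|he)\b", subject, rewritten,
                       flags=re.IGNORECASE)
    return rewritten

print(rewrite_followup("Can I walk there?", "the nearest Sainsbury's"))
# -> Can I walk to the nearest Sainsbury's?
print(rewrite_followup("How old is she?", "Theresa May"))
# -> How old is Theresa May?
```

The practical upshot is the same as above: by the time the query reaches the ranking systems, the fragment has been expanded into a full, subject-complete query, so your content only ever needs to target the expanded form.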
So, full marks for Google on conversational local search. Next, I tried the same query with Bing:
Bing correctly presented me with the locations of three nearby supermarkets, but didn’t offer to direct me to any one of them.
So, Bing was 0 for 2 so far, at least compared with Google’s sheer intuition and helpfulness. However, I felt like I might not be giving Bing a fair shot. Because I have an Android phone, I was searching using Google’s native Android app, with Google’s built-in voice assistant. Maybe, if I used Microsoft’s dedicated voice assistant, my results from Bing would be of better quality.
I downloaded the Cortana app for Android and tried it out with my initial query from earlier. “Who is the UK Prime Minister?”
“Here is what I found,” Cortana replied, showing me the words “Theresa May”, with a series of images of the UK Prime Minister.
“How old is she?” I then asked. After “Thinking” for a few seconds, Cortana admitted,
“I think I may have lost the thread of our conversation.”
I think you have, Cortana.
How to optimise for natural language and conversational search
In the two previous instalments of this series, I assessed how commonplace voice search really is on mobile and smart speaker devices, drawing the conclusion that true voice search – as opposed to using voice commands – is a lot less prevalent than the level of hype in our industry would have you believe.
However, if you believe that voice commands and voice interfaces are the future, you may well argue that it’s worth understanding how they could change the way we search and the way we create content to satisfy those searches.
Here are a few prevalent trends we’re already beginning to see as a result of natural language and conversational search. If you do set out to optimise for voice search, now or in the future, these are all trends you should be catering to.
Longer sentences and more colloquial language
Let’s start with the most obvious one: natural language searches are longer, and typically use a different register of language to keyword searches.
Rather than being stilted and formal, they’re familiar and chatty, with contractions, stop words (common connecting words which search engines are normally programmed to ignore) and more specifics. While this makes it harder to match your content to someone’s query exactly, there are things you can do to make your content more “friendly” to natural language search.
- Adjust the register of your content to be more colloquial. Unless there’s a reason that your brand’s tone needs to be extremely formal, try to write your content to sound like a person – this will also make it sound more natural when read aloud as a voice result.
- Consider stop words in your keyword research. Yoast SEO has a good article on how stop words can affect the kinds of results that you get back for a search – “shoes for kids” as opposed to “kids shoes”. Make sure you’re on top of the differences.
- Do your homework – use natural language search queries, see where you rank for them, and test different types of content to see which performs best.
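To make the stop-word point concrete, here is a minimal Python sketch. The tiny stop-word list and the crude tokeniser are illustrative assumptions (real search engines use far richer models), but it shows how naively stripping stop words can collapse two queries with different intent into the same keyword set:

```python
# Toy illustration: naive stop-word stripping can erase intent.
# This stop-word list is a small illustrative sample, not a real one.
STOP_WORDS = {"a", "an", "the", "of", "for", "to", "in", "is", "how"}

def to_keywords(query):
    """Lowercase, split on spaces, and drop stop words."""
    return [w for w in query.lower().split() if w not in STOP_WORDS]

print(to_keywords("shoes for kids"))  # -> ['shoes', 'kids']
print(to_keywords("kids shoes"))      # -> ['kids', 'shoes']
# Both queries reduce to the same two terms, yet "shoes for kids"
# signals a different intent (shoes suitable for children) than the
# generic keyword pairing "kids shoes".
```

This is exactly why the Yoast article mentioned above matters: whether and how a search engine handles stop words changes which results a natural language query returns, so it is worth testing both phrasings in your keyword research.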
More personal searches
In May, Think With Google published a very interesting piece entitled, ‘How conversational searches change your search strategy’ which revealed how the language we use in search is changing.
In particular, it highlighted that people are increasingly searching using personal language – “do I need…” or “…for me”. As Google writes,
What we’re describing here is searching with natural language in a manner reminiscent of asking for advice. Much like when they talk to a person, people are starting to use “I” in their searches.
Google noted that mobile searches for “do I need” and “should I” have each grown more than 65%, while mobile searches starting with “can I” have grown by more than 85%. While this doesn’t tell us how many of these kinds of searches there are, it’s a noteworthy change.
Google’s advice to marketers in light of this trend is to “create responses that meet nuanced customer needs” – again, consider conversational queries and write content that will satisfy these types of queries.
More questions
Natural language and conversational search queries have given rise to more searches being phrased in the form of a question. Creating content designed to answer these questions can improve your chances of ranking for them, and it will also boost your chances of getting a coveted featured snippet or direct answer box.
A common tip for catering to question queries is to create FAQ-style content pages dedicated to answering questions. However, while they might rank well in search, these aren’t great from a user experience or a conversion standpoint: users can find them hard to navigate and pick apart, they often fall out of date, and they don’t tend to lead the user on to a purchase of your product or service.
Instead, provide purposeful information that is specifically designed to meet the user’s goals and yours (A List Apart has a good how-to guide) – or if you do need to create an FAQ page, build it specifically as a welcoming landing page for new visitors, and make sure it’s kept fresh and easy to parse.
Where might voice search go from here? In the final part of this series, I’ll consider the future of voice search, and when and how we might see it become the norm.