SEO in 2023 is a very different beast to SEO in 2013, or even 2018. Although search rankings no longer swing as dramatically from one algorithm update to the next, staying on top of the latest practices and guidelines still makes a difference, ensuring that SEOs aren’t caught unawares by a new development, or left unable to explain a change in ranking.

So what are the current trends shaping search, and what should SEOs do to respond to them?

Part 1 covered the importance of first-hand expertise demonstrated by content creators, as highlighted in Google’s Search Quality Evaluator Guidelines, and also looked at what generative AI means for search.

In part 2:

  1. The growing ‘long tail’
  2. Multimodal search and multisearch
  3. Video content and search


The growing ‘long tail’

At the end of 2022, Google started rolling out an interface change aimed at “making it … easier to dive deeper” into a search. Google now suggests related keywords to drill deeper into a search term – for example, searching “England World Cup” will prompt Google to suggest adding “standings”, “squad”, “wins” or “fixtures” to your search, with the additional keywords appearing alongside the News and Images options below the search bar.

The News and Images options still link through to Google News and Google Images respectively, but selecting one of the suggested keywords initiates a new search with the added keyword, with the option to drill down even further on the next screen (so “England World Cup fixtures” can become “Women’s World Cup England fixtures”). These keywords ‘stack’ at the side of the screen and can be removed individually to zoom back out to the broader search query.

Google has started suggesting additional keywords to narrow down a user’s search in order to help them “dive deeper” into a given topic.

In its blog announcement about the change, Google wrote, “When you conduct a search, our systems automatically display relevant topics for you based on what we understand about how people search and from analyzing content across the web.

“Both topics and filters are shown in the order that our systems automatically determine is most helpful for your specific query. If you don’t see a particular filter you want, you can find more using the “All filters” option, which is available at the end of the row.”

In Econsultancy’s search and SEO predictions for 2023, BrightEdge co-founder Lemuel Park commented that as a result of this update, “[We can] expect searches to get more long tail and sophisticated … The impact of this will be that SEOs will need to place a greater focus on their deeper content and support really specific queries more than broad ones.”

Multimodal search and multisearch

Visual search – the use of a visual input, instead of text, to carry out a search – has been talked about for years (Econsultancy published a marketers’ guide to visual search back in 2018) but it took some time for the trend to gather momentum and move beyond relatively niche, isolated use cases in ecommerce (or Pinterest) into wider web search.

A major milestone in this regard was Google’s introduction of the Multitask Unified Model, or MUM, an upgrade to its AI that enabled Google to correctly interpret more complex, multi-part search queries. Notably, Google described MUM as being “multimodal, which means it can understand information from different formats like webpages, pictures and more, simultaneously”.

Since then, Google has only increased its capacity for processing and outputting searches in multiple ‘modes’ – primarily images, but also videos and to an extent, audio. In April 2022, Google announced ‘multisearch’ with Google Lens, which allows searchers to combine imagery with keywords – such as snapping a picture of a dress and adding ‘green’ – to carry out searches that might be difficult to express with words alone.

Google later added to this with ‘Multisearch near me’, which surfaces local results in response to a visual or multimodal search, and also announced that multisearch would be expanding to support more than 70 languages.

A lack of reporting presents challenges for SEOs

While these are exciting developments for searchers, what do they mean in practice for SEOs? Unfortunately, it seems as though reporting of visits from multisearch in Google Search Console and Google Analytics is limited: in October 2022, Glenn Gabe of G-Squared Interactive investigated multisearch tracking in GSC and determined that “there is currently no tracking at this point in Search Console for multisearch.” (More detail on Gabe’s methodology can be found in the linked blog post).

As for GA, Gabe concluded that multisearches are reported as a mixture of Direct Traffic and Google QuickSearchBox, which makes specific tracking of multisearches difficult. As Gabe highlights, it is early days for this feature and usage may not yet be significant, but it would be valuable for site owners to have insight into what role, if any, multisearch plays in discovery, and which pages are being surfaced in response to multisearches. At Google I/O in May, Google revealed that Lens is now used for 12 billion visual searches per month – up 50% from 2022’s eight billion – and “a growing number of those searches are multimodal”.
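In the absence of dedicated reporting, site owners who want a rough view of this traffic can approximate it in GA4. The Python sketch below uses the GA4 Data API (the google-analytics-data client library) to pull sessions whose source contains “googlequicksearchbox” – the Google app identifier that the “Google QuickSearchBox” bucket Gabe describes points to. The property ID is a placeholder, and this bucket also catches ordinary searches made from the Google app, so treat it as a loose proxy rather than a true multisearch report.

```python
# Rough sketch: surface GA4 sessions attributed to the Google app
# ("googlequicksearchbox"), the bucket Gabe found multisearch visits
# landing in. Assumes the google-analytics-data library is installed
# and application default credentials are configured; the property ID
# below is a placeholder.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Filter, FilterExpression, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()

request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property ID
    dimensions=[Dimension(name="sessionSource"), Dimension(name="landingPage")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
    # Match any session source containing the Google app identifier.
    dimension_filter=FilterExpression(
        filter=Filter(
            field_name="sessionSource",
            string_filter=Filter.StringFilter(
                value="googlequicksearchbox",
                match_type=Filter.StringFilter.MatchType.CONTAINS,
            ),
        )
    ),
)

# Print each landing page with its session count.
for row in client.run_report(request).rows:
    print(row.dimension_values[1].value, row.metric_values[0].value)
```

Grouping by landing page at least hints at which pages this traffic reaches – the closest a site owner can currently get to the “which pages are surfaced” question above.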

In May 2022, Google’s John Mueller offered some advice for SEOs who want to optimise for multisearch:

“…if your images are indexed, then we can find your images and we can highlight them to people when they’re searching in this way. […] [I]f you’re doing everything right, if your content is findable in search, if you have images on your content and those images are relevant, then we can guide people to those images or to your content using multiple ways.”

Perhaps unsurprisingly, then, Google recommends that SEOs optimise for multisearch in the same way that they optimise for ‘regular’ image search – but reporting would unquestionably be helpful in determining whether multisearches really do make a difference.
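In practical terms, “doing everything right” means standard image SEO: descriptive file names and alt text, images placed near relevant text, and – where it helps – an image sitemap. As a small illustration, the Python sketch below generates an image sitemap in the format Google documents; all of the page and image URLs are placeholders.

```python
# Minimal sketch: build an image sitemap (a mechanism Google documents
# for making images easier to discover and index). All URLs below are
# placeholders for illustration.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
IMAGE_NS = "http://www.google.com/schemas/sitemap-image/1.1"

ET.register_namespace("", SITEMAP_NS)
ET.register_namespace("image", IMAGE_NS)

# Map each page URL to the images that appear on it.
pages = {
    "https://www.example.com/dresses/green-midi": [
        "https://www.example.com/images/green-midi-dress.jpg",
        "https://www.example.com/images/green-midi-dress-detail.jpg",
    ],
}

urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
for page_url, image_urls in pages.items():
    url = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")
    ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = page_url
    for image_url in image_urls:
        image = ET.SubElement(url, f"{{{IMAGE_NS}}}image")
        ET.SubElement(image, f"{{{IMAGE_NS}}}loc").text = image_url

# Write the sitemap, ready to submit via Search Console or robots.txt.
ET.ElementTree(urlset).write(
    "image-sitemap.xml", xml_declaration=True, encoding="utf-8"
)
```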

It’s worth mentioning that Bing has also introduced multimodal capabilities to Bing Chat, enabling users to “upload images and search the web for related content”. Again, it remains to be seen how significant usage will be – overall reporting for Bing Chat is scheduled to arrive in Bing Webmaster Tools, but despite reports in late May that a rollout was days away, at the time of writing it appears still to be unreleased.

Video content and search

SEOs have long been aware that incorporating and optimising video opens up ranking and visibility opportunities that aren’t present for other types of content, with Google surfacing dedicated ‘video carousels’ in a significant portion of search results (27%, according to current Mozcast data, putting their prevalence just below Knowledge Panels at 31.1% and above images at 19.9%).

This appears to be particularly true for short-form videos, which were at one time trialled in a dedicated carousel, and are expected to feature more prominently in Google’s Search Generative Experience (see part 1 for further discussion of SGE), which Google plans to make more “visual, snackable, personal, and human” according to internal documents seen by the Wall Street Journal.

Google has already revealed that SGE will surface videos and images to help searchers gain a “better understanding” of topics, with a demo showing a highlighted video clip accompanying an AI-generated summary.

Like all content marketing, video content should serve a genuine purpose and be informative and useful – but where it does, it can also be a boon for SEO, and Google has published a set of best practice guidelines to ensure that videos are indexed in search.
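Those guidelines cover, among other things, VideoObject structured data, which helps Google understand what a video is about and surface it appropriately. As a rough illustration, the Python sketch below emits a JSON-LD VideoObject snippet for embedding in a page; every value is a placeholder, and which properties are worth including depends on the video.

```python
# Sketch: emit a JSON-LD VideoObject snippet of the kind Google's
# video best practices describe. All values are placeholders.
import json

video = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "How to style a green midi dress",
    "description": "Three outfit ideas built around one dress.",
    "thumbnailUrl": ["https://www.example.com/thumbs/green-midi.jpg"],
    "uploadDate": "2023-06-01",
    "duration": "PT2M30S",  # ISO 8601 duration: 2 minutes 30 seconds
    "contentUrl": "https://www.example.com/videos/green-midi.mp4",
}

# The finished snippet belongs in the page's <head> or <body>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(video, indent=2)
    + "\n</script>"
)
print(snippet)
```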

The key trends to know in SEO – Part 1: ‘Experience’ and generative AI