At first glance, it may appear that Google has just compiled some information on the side of a search result, leading to other related search results, and in a way it has. Danny Sullivan of Search Engine Land notes:
At the time, I felt what was described seemed more an extension of things Google had already been doing rather than a dramatic shift. Now having seen it first-hand, I stand corrected.
A step forward
Up until now, Google has been a simple search engine leading users towards web pages which its algorithm thought they were searching for based on their search string.
The Knowledge Graph marks a huge change in that method. Through it, Google is attempting to build a deep associative context around entities, trying to understand the query itself rather than simply regurgitating what it believes is the closest result.
Though this concept of a smarter web isn’t a new idea, the Knowledge Graph is by far the most promising experiment conducted to date. Back in 2001, UK computer scientist and inventor of the World Wide Web Tim Berners-Lee wrote an article in Scientific American about the “semantic web”, in which he described his vision as “a new form of Web content that is meaningful to computers and will unleash a revolution of new possibilities”.
What Berners-Lee wanted was a version of the web that not only had data but understanding, to the level that it could provide answers on its own. That is exactly what Google is trying to do here. Not just links to answers, but answers themselves.
Take a simple search for “Tim Berners-Lee”:
We are greeted with photos so we know what he looks like, a description so we know who he is at a glance, his education and published works so we understand his credibility, and a list of other very intelligent people who are seen as his peers.
How the Knowledge Graph works
Google is attempting to understand the meaning behind the query and provide you with information it is confident is exactly what you were looking for. With their vast bird’s-eye view of online data, combined with harvested structured schema, RDF and microformat data, search logs, and mark-up data from their semantic database Freebase, Google can identify concepts to display within the Knowledge Graph, supplementing search results with facts that add value to the user experience.
This sort of behaviour for a standard search term requires a solid understanding of exactly what that term refers to, and firm confidence in the data that has been collected.
When unsure, Google will help you (and potentially themselves, using click data) filter via entity-based search results. If you’re looking for “Cardiff”, for example, Google will guess that you might be referring to the city, the football team, or the airport, and give you the option to narrow down your search results.
At the moment this is easy for famous and noteworthy people, places and things thanks to freely available data from Freebase and DBpedia, but what about the wider web?
While not currently part of the greater Knowledge Graph, Google still has some understanding of search terms outside their database of entities. For example, type “BoxUK” into Google UK and you will be greeted with a snippet from our Google+ page which gives updates on our business and the whole digital industry.
But type in “BoxUK Cardiff” and you get a very different response in the form of a map to our Cardiff office as well as contact information.
Similarly, you can also see entity-based results in a natural language search. This would be a search string based on a question or a complete sentence. For example, a search for “Population of London” returns a rich snippet giving you that information within the search results.
Though entity search is still very much in its earliest stages and the Knowledge Graph represents only a start towards a true semantic web, we can see that there are huge strides being made. As digital marketers, it is our job to understand these changes and provide strategy to our clients to offer them maximum visibility.
What can we do today?
From a business perspective, there are some simple steps you can take to begin optimising your website to provide search engines with more data.
Schema.org is a shared mark-up vocabulary, created by search engines including Bing, Google, Yahoo! and Yandex to give them a greater understanding of a webpage and its contents.
If you haven’t already done so, I’d strongly recommend using structured mark-up to give the search engines a better understanding of who, what and where you are. If you are unfamiliar with schema mark-up, you can make use of a free tool to create the correct mark-up quickly at http://schema-creator.org/.
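To give a flavour of what this mark-up looks like, here is a minimal sketch of schema.org microdata for a local business page. All of the business details below (name, address, phone number, URL) are placeholders, not real data:

```html
<!-- Minimal schema.org microdata sketch for a local business.
     itemscope/itemtype declare the entity; itemprop labels its properties. -->
<div itemscope itemtype="http://schema.org/LocalBusiness">
  <h1 itemprop="name">Example Agency</h1>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">1 Example Street</span>,
    <span itemprop="addressLocality">Cardiff</span>
  </div>
  <span itemprop="telephone">+44 29 0000 0000</span>
  <a itemprop="url" href="http://www.example.com">example.com</a>
</div>
```

The visible content stays exactly as your visitors would see it; the extra attributes simply tell search engines that “Example Agency” is the name of a local business at that address, rather than arbitrary text.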
For Google at least, Google+ is a great source of data and I expect to see Google+ content being used more often in the future.
Get your business set up and complete a profile to begin linking data to your website. There has been much talk in recent months of AuthorRank (also known as AgentRank) and its debated effect on rankings, so here are two ways you can experiment with your own brand and writers:
Have authorship information appear in search results for the content you create by connecting a personal Google+ Profile with the content via an onsite tag. Check the current guidelines for correct implementation at Google Support.
This can also be implemented on any guest blogging campaign you are running, giving searchers a friendly face within the SERPs when searching for their favourite websites.
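As a rough sketch, connecting a page to a personal Google+ profile can be as simple as one link with the author relation (the profile ID below is a placeholder; check Google’s current guidelines for the exact requirements):

```html
<!-- Links this page's content to the author's Google+ profile.
     rel="author" is the signal; the numeric profile ID is a placeholder. -->
<a href="https://plus.google.com/112233445566778899000" rel="author">
  Follow me on Google+
</a>
```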
Implement rel=publisher by adding the rel="publisher" tag to the head of your site. This will give Google a strong signal that your website and Google+ business page are the same entity.
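A minimal sketch of the publisher tag, assuming a placeholder Google+ page ID, looks like this:

```html
<head>
  <!-- Tells Google this site belongs to the Google+ business page below.
       The numeric page ID is a placeholder. -->
  <link rel="publisher" href="https://plus.google.com/110987654321012345678"/>
</head>
```

Unlike the authorship link, this goes in the `<head>` of the site rather than in the visible page content, and points at the business page rather than a personal profile.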
Once you have all of your tags implemented, check they are working correctly using a neat free Rich Snippets tool from Google. This will show you exactly what will display within the search results for that URL.
Freebase and Wikipedia
Freebase (then part of Metaweb) was acquired by Google back in 2010 and is now a major source of information for the Knowledge Graph, as is Wikipedia. Freebase is free to join, but it is important to maintain the credibility of your listing.
Wikipedia is much more selective with its listings and I would not advise creating your own business page. Before you do anything, check out the video below to learn more about Freebase and be sure to read their guidelines on adding organisations to either Wikipedia or Freebase.
Local listings are a huge part of Google’s database and they have been showing rich snippets for local businesses for a while. Apart from numerous other advantages that a local listing gives, you can hand-feed Google information about your website and business location, further increasing your brand signals.
As you can see from our “BoxUK Cardiff” search example, Google is showing local listings in place of the Knowledge Graph wherever possible. I would be very surprised if this data is not utilised to help enhance their current database in the future.
There are some fantastic resources on the web to learn more about SEO in relation to the new entity style of searching such as:
Entity Search Results: The On-going Evolution of Search by Justin Briggs
10 Most Important SEO Patents: Part 6 – Named Entity Detection in Queries by Bill Slawski
It’s Time To Stop Ignoring Entity Search by Dan Shure
Search in the Knowledge Graph era by Gianluca Fiorelli
Over to you
I hope you enjoyed this walk through the Knowledge Graph but we’d love to hear your thoughts. Where do you see the search engines taking entity search in the coming years?