Search Engine Journal has pointed out an excellent article by Bill Slawski on the complicated business of how search companies re-rank results.

It is important to get to grips with these factors if you are actively involved in SEO, since rankings can be influenced by any number of signals that may be outside of your control.

The article lists 20 ways in which pages might be re-ordered after their initial indexing.

By linking to various patent applications and research papers, it also offers some clues about where search engines are heading.

These points will be interesting for anybody who undertakes keyphrase monitoring on a regular basis.

Let’s look at a few of the more important factors.

Firstly, there’s the duplicate content issue…

1. Filtering of duplicate, or near duplicate, content

Search engines don’t want the same page or content to fill search results, and pages that are substantially similar may be filtered out of search results.
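The patents don't reveal the exact algorithms, but one classic approach to near-duplicate detection is to compare overlapping word "shingles" between pages. Here's a minimal Python sketch; the shingle size and similarity threshold are illustrative assumptions, not anything Google has confirmed:

```python
def shingles(text, k=3):
    """Split text into a set of overlapping k-word shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def is_near_duplicate(doc1, doc2, threshold=0.8):
    """Flag two documents as near-duplicates if their shingle sets overlap heavily."""
    return jaccard(shingles(doc1), shingles(doc2)) >= threshold
```

A real engine would use something far more scalable, such as hashed shingle fingerprints, but the underlying idea is the same.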

And on to personalisation. Make sure you’re not logged into Gmail when compiling that SEO report if you want the non-personalised view…

3. Based upon personal interests

A search engine may try to rerank results for a search to a specific searcher based upon past searches and other tracked activity on the web from that person.
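At its simplest, that kind of personalisation could be sketched as a score boost for results matching a profile of the searcher's past interests. This Python toy is purely illustrative; the data structures and the boost value are assumptions:

```python
def personalize(results, interest_terms, boost=0.5):
    """Rerank results for one searcher.

    results: list of (title, base_score) tuples from the general ranking.
    interest_terms: lowercase terms inferred from the searcher's past activity.
    """
    def score(item):
        title, base = item
        # Add a fixed boost per interest term mentioned in the title.
        matches = sum(1 for t in interest_terms if t in title.lower())
        return base + boost * matches

    return sorted(results, key=score, reverse=True)
```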

Bill also notes that search is becoming territory-specific. Remember that Google will default to google.co.uk for UK internet users. You can change the URL to .com after you’ve entered your search query, to get a global view of your search rankings…

5. Sorting for country specific results

It’s possible that a searcher may wish to see results biased towards sites from a specific country. A searcher might explicitly choose a preference for a particular country, or the system may try to infer such a preference dynamically from their IP address.
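A toy version of that IP-based preference might look like the Python sketch below. The IP-prefix table and boost value are entirely hypothetical; a real engine would use a full geolocation database rather than string prefixes:

```python
# Hypothetical IP-prefix-to-country table, for illustration only.
IP_PREFIXES = {"81.": "UK", "212.": "UK", "64.": "US"}

def infer_country(ip):
    """Crudely guess a country from an IP address string."""
    for prefix, country in IP_PREFIXES.items():
        if ip.startswith(prefix):
            return country
    return None

def rerank_by_country(results, ip, boost=0.3):
    """Boost results hosted in the searcher's inferred country.

    results: list of dicts with 'score' and 'country' keys.
    """
    country = infer_country(ip)
    return sorted(
        results,
        key=lambda r: r["score"] + (boost if r["country"] == country else 0.0),
        reverse=True,
    )
```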

This next one seems to have become more important of late. News stories, for example, may start off with a high Google ranking on a related keyphrase search, but will most likely fall as time goes by…

8. Reranking based upon historical data

Pages can be reranked based upon a large number of time-related factors, involving the age of documents, the age of links to those documents, and other historical data.
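One common way to model the news-story pattern above is freshness decay, where a document's score falls off with age. A simple exponential model, with an assumed half-life, might look like this in Python; the half-life figure is an illustration, not a known Google value:

```python
import math

def time_decayed_score(base_score, age_days, half_life_days=30.0):
    """Halve a document's freshness contribution every half_life_days."""
    return base_score * math.exp(-math.log(2) * age_days / half_life_days)
```

With a 30-day half-life, a story scores its full value on day zero and half of it a month later, which matches the gradual slide down the rankings described above.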

We’re expecting that accessibility will play an increasing role in governing search rankings… could this spell the end of 100% Flash websites? Our fingers, like yours, are firmly crossed…

12. Reranking based upon accessibility

Google recently came out with a specialized search that reorders pages based upon accessibility in their Accessible Web Search for the Visually Impaired.
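We don't know how Google actually scores accessibility, but a toy heuristic might, for example, reward images that carry alt text. This Python sketch is purely hypothetical:

```python
import re

def accessibility_score(html):
    """Toy heuristic: fraction of <img> tags that carry an alt attribute."""
    imgs = re.findall(r"<img\b[^>]*>", html, flags=re.I)
    if not imgs:
        return 1.0  # No images, nothing to penalise.
    with_alt = [tag for tag in imgs if re.search(r"\balt\s*=", tag, flags=re.I)]
    return len(with_alt) / len(imgs)
```

A 100% Flash site, offering no text alternative at all, would fare badly under any heuristic along these lines.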

Finally, here’s something that will be most noticeable for anybody listed on Google News. It seems, for example, that you can be too early to a story, or too late, in terms of clustering. Embrace the blogosphere, people…

19. Reranking by looking at blogs, news, and web pages as infectious disease

This IBM patent application draws an analogy to disease-propagation models to describe how segmenting the blogosphere and bulletin boards into topics, and paying attention to time-based changes and additions to those topics, might tell a search engine which topics and terms are popular, and where information about them might be located.
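The analogy can be made concrete with a toy SIR-style (susceptible/infected/recovered) simulation, where blogs "catch" a topic from others that are already posting about it, then eventually stop. All the parameters below are illustrative, not taken from the patent:

```python
def simulate_topic_spread(population, initially_aware, transmission=0.3,
                          loss=0.1, steps=60):
    """Toy SIR model of a topic spreading through a blog population.

    Returns the number of 'infected' (actively posting) blogs at each step.
    """
    susceptible = population - initially_aware
    infected = float(initially_aware)
    recovered = 0.0
    history = []
    for _ in range(steps):
        # Blogs catch the topic in proportion to contact with infected blogs.
        new_infections = transmission * susceptible * infected / population
        # Blogs lose interest at a constant rate.
        recoveries = loss * infected
        susceptible -= new_infections
        infected += new_infections - recoveries
        recovered += recoveries
        history.append(infected)
    return history
```

The resulting curve of actively posting blogs rises and then falls, much like the lifecycle of a news story in the blogosphere, and the timing of that curve is exactly what would tell a search engine you were "too early" or "too late" to a story.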

Good stuff, Bill.

Related reading:

Search Engine Marketing – Best Practice Guide