Eye-tracking has been used in web design for many years. However, a widespread preconception is that it takes PhD-level technicians, plus long consulting hours, to make any sense or use of people's eye-gaze data.

The value of eye-tracking has been tied directly to consultancy skills, but shouldn't it be more about real users?

Eye-tracking has been used mainly as a qualitative tool for technical reasons. The hardware was difficult to operate, and only a few consulting houses had the access and capability to run tests under lab conditions. Operating labs and recruiting people from consumer panels is expensive, which forced consultancies to stick with small sample sizes to fit clients' budgets. Those small sample sizes have held back the wider acceptance of eye-tracking analysis in web design.

In 'Eye-tracking 2.0', devices will be taken to users, not users to devices. This fundamentally changes the speed of testing and the number of users who can be eye-tracked for analysis purposes.

Quantitative eye-tracking speaks with the voice of the customer, not of the consultant. Testing around 50 people is enough to achieve reliable visual analysis for any piece of media. The statistical significance of this sample size allows real user design preferences to be conveyed in a neutral and objective manner. Visual metrics and animations of user interaction gain stand-alone value as models of real user behaviour on designers' and marketers' desktops.
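To see why sample size matters for heatmap reliability, here is a minimal simulation sketch (not Realeyes' actual method): two independent groups of simulated viewers look at the same page, and we measure how well their aggregate attention heatmaps agree via a split-half correlation. The hotspot positions, grid size, and noise levels are invented for illustration; the general pattern is that agreement rises with the number of viewers.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 20  # heatmap resolution (GRID x GRID cells over the page)

def simulate_fixations(n_viewers, hotspots, spread=2.0, fix_per_viewer=10):
    """Simulate fixation points: viewers look near shared hotspots with noise."""
    pts = []
    for _ in range(n_viewers * fix_per_viewer):
        cx, cy = hotspots[rng.integers(len(hotspots))]
        pts.append((cx + rng.normal(0, spread), cy + rng.normal(0, spread)))
    return np.array(pts)

def heatmap(points):
    """Bin fixations into a normalised GRID x GRID attention map."""
    h, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=GRID, range=[[0, GRID], [0, GRID]])
    return h / h.sum()

def split_half_r(n_viewers, hotspots):
    """Correlation between heatmaps from two independent groups of n viewers."""
    a = heatmap(simulate_fixations(n_viewers, hotspots))
    b = heatmap(simulate_fixations(n_viewers, hotspots))
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Three hypothetical areas of interest on a page of average complexity
hotspots = [(5, 5), (14, 8), (8, 15)]
for n in (5, 15, 30, 50):
    print(f"{n:>2} viewers: split-half r = {split_half_r(n, hotspots):.2f}")
```

Under these assumptions, agreement is weak with a handful of viewers and stabilises somewhere between 30 and 50, which matches the rule of thumb discussed below; a more complex page (more hotspots, more noise) would push the required sample size up.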

Consultant expertise and qualitative insights are extremely valuable; quantitative eye-tracking complements them with sound numbers. Technological progress has made data collection and packaging 'standard issue'. Professionals can now spend their time on more demanding and value-adding activities than running user tests or analysing gigabytes of eye-tracking data.

Reliable quantitative visual analysis is now available as a standard tool, and consultants can focus more on fundamentals, much like fund managers who use outsourced data feeds but make their own investment decisions.

Eye-tracking was often oversold in the past, creating well-deserved scepticism towards the technology. The 'before-and-after' case studies of websites redesigned based on nothing but heatmaps from 10 people rightly upset many industry professionals.

As eye-tracking hardware improves and operational models for analysis develop, there will be less 'magic' and more of the real stuff: identifying user preferences and using that knowledge to achieve better web design.

Mihkel Jäätma is co-founder of eye-tracking company Realeyes.


Published 13 March, 2008 by Mihkel Jäätma


Comments (3)


Robert-Jan van Diepen, CEO at DiepbiZniZ Consulting

Interesting post. I'm conducting eye-tracking studies in the Netherlands. Do you know whether any scientific research has been done on the stability of heatmaps? In your post you mention 50 participants to reach statistically significant conclusions about viewing behaviour. Or is significance already reached with 30 participants?

over 10 years ago


Mihkel Jäätma, Founding Partner at Realeyes

Hello Robert-Jan,

Thanks for the good point you're making here! The precise sample size needed to achieve statistical significance depends on the page being tested. Sophisticated pages with more design elements need larger samples, while simple designs with fewer elements are fine with smaller ones.

The link in the article describes a page of average complexity. As you can intuitively see there, 30 people is almost satisfactory for that particular page.

There are methods used in molecular genetics for comparing images quantitatively that can give you a rock-solid answer in each case, but the rule of thumb is that 30 starts to be acceptable and 50 is enough for almost any web page.

Does this logic seem to apply in your industry and experience as well?


over 10 years ago


Robert-Jan van Diepen, CEO at DiepbiZniZ Consulting

@Mihkel: Thanks for your explanation! We do not test with samples as large as we should. The main problem is that a larger sample significantly increases the price of a particular project.

over 10 years ago
