
Your go-to guide for hitch-free eye-tracking studies.

By Aditi Kant · March 31, 2020

My first brush with eye-tracking came in 2002, while I was in college studying to be an Industrial Designer. Samsung set up a lab on the premises with an eye-tracking facility by SensoMotoric Instruments (SMI), now owned by Apple. As part of one of my projects, I was to study users’ scanning patterns on various posters. The eye-tracking machine itself was terrifying. Calibrating the tracker was so difficult for anyone wearing glasses or contact lenses that I was sure I would never finish my project on time. When I expressed these concerns to my professor, he (very optimistically) told me that these studies would improve because technology evolves faster than we think, and he was right! The optical devices on today’s market are light years ahead of the intrusive options used in the past.

Modern eye trackers are far easier to set up and use. At Digital of Things, we use state-of-the-art screen-based eye trackers in our research studies. We combine analytical data from facial expression analysis and eye-gaze tracking with qualitative data from the audio and video of the users, and bring it all together in a full review by our UX experts. We can produce heat maps, gaze plots and cluster maps to show key areas of interest on websites and apps.
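Since heat maps come up again later in this article, here is a minimal sketch of how raw gaze samples might be binned into one. This is not our production pipeline or any particular tracker’s export format: the file name, column names and screen resolution are assumptions for illustration only.

```python
# Hedged sketch: aggregate raw gaze points into a heat map with numpy/matplotlib.
# Assumes a hypothetical CSV export with gaze_x, gaze_y in screen pixel coordinates.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

SCREEN_W, SCREEN_H = 1920, 1080  # assumed screen resolution

gaze = pd.read_csv("gaze_points.csv")                 # hypothetical export file
valid = gaze.dropna(subset=["gaze_x", "gaze_y"])      # drop samples the tracker lost

# Bin gaze samples into a 2D histogram; denser bins become hotter areas.
heat, _, _ = np.histogram2d(
    valid["gaze_x"], valid["gaze_y"],
    bins=[96, 54],
    range=[[0, SCREEN_W], [0, SCREEN_H]],
)

# Transpose so rows map to screen y, and flip the extent so y grows downward
# like screen coordinates do.
plt.imshow(heat.T, origin="upper", cmap="hot", extent=[0, SCREEN_W, SCREEN_H, 0])
plt.title("Gaze density heat map")
plt.axis("off")
plt.savefig("heatmap.png", dpi=150)
```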

You might be familiar with the benefits of eye-tracking studies, or even with how to use eye tracking in conjunction with other types of user testing. Being able to see where users are looking while they explore an app or a website is a unique opportunity to gain amazing insights into their overall user experience: insights that reveal things such as learning patterns and social interaction methods that would otherwise be overlooked.

Technically speaking, eye tracking measures the movements of the eye to determine where someone is looking, what they are looking at, and how long their gaze stays in one spot. The eyes are one of the primary tools we use in decision-making, which is why eye-tracking studies are increasingly used in research on human behaviour.

Eye-tracking studies are usually more expensive than run-of-the-mill user testing, and they require specialised researchers, adept at using the technology, to oversee the testing.

Throughout this article, I’ll share several first-hand insights into the problems of using eye-tracking technology, based on what I’ve seen over my many years of user testing. For example, let’s assume you have client buy-in along with the time and budget to carry out an eye-tracking study, and a sophisticated, fully equipped lab at your disposal. There is still a possibility that you will not get sufficiently reliable data. But why? What could be the deterrents?

This is what I will address throughout this article, based on observations from the last two years of conducting eye-tracking studies with nearly 200 participants. I focus on my experience with the screen-based Tobii X2-30 Compact eye tracker. As the name suggests, it is a compact system and works well with device stands for mobile and laptop testing. We use this state-of-the-art eye tracker in our lab, and because it is so compact we can also bring it to client locations when we need to test remotely.

Our observations

1. The wobble heads

Even though we thoroughly explain the importance of sitting still to participants before initiating the study, some participants simply cannot sit straight and keep their head and body still while the eye tracking takes place. This relates to the concept of the “headbox”, i.e. the limits within which participants can move their head without affecting the study. The image below shows those limits for a person using a screen-based eye tracker such as the Tobii X2-30 Compact.

This particular issue is intensified when the moderator probes the user with questions, as users naturally look towards the moderator and their gaze goes off screen.

2. Long long lashes

Who doesn’t like thick, dark, gorgeous lashes? Our eye tracker, that’s who! The eyes of participants who wore false eyelashes, or who had naturally thick and dark eyelashes, were not easily read by the eye tracker, if at all. Long eyelashes can confuse the eye tracker and produce inaccurate readings.

I experienced this firsthand and was put on the spot to make a quick decision: after five failed attempts to calibrate the eye tracker, I chose to skip the eye-tracking session for that participant. The result? One less data point, a very stressed participant, and a last-minute scramble to complete the testing. There have been a few other similar ‘eyelash’ incidents, and we’ve since learned our lesson to be upfront about ‘eyelash etiquette’ at the time of participant recruitment.

3. Those fringes & bangs

We’ve found that some of our participants had haircuts that partially covered their eyes. This hindered the eye tracker’s ability to capture all of their eye movements. If the machine is unable to calibrate and/or track movement, the data cannot be captured or used.

4. Literal myopes

Unfortunately, we’ve often faced problems with participants who wear contact lenses or bifocal/varifocal lenses. Glasses can cause glare and reflections, which impair detection and data recording by the eye-tracking machine. The state of the glasses can also be an issue: scratched or dirty lenses cause similar tracking problems.

It’s a shame, since most of the clients we conduct tests for have a general user base with no specific physical restrictions. I mean, do you think Carrefour would care whether you shop with your glasses on or off? They’d probably want you to shop however you are most comfortable!

Some participants offered to take their glasses off to help with calibration. However, this negatively affected their screen scanning, reading and task-completion times. Although it is rare, participants who have had LASIK or cataract surgery cannot participate, as tracking becomes too unreliable, so it’s best to screen them out beforehand.

5. 50 shades of grey

The lighter the eye colour, the harder it is for the eye tracker to follow eye movements. This means that for a participant with blue eyes, for example, data collection and accuracy will be lower than for someone with brown eyes. As Dubai is a melting pot of cultures, our participants come from around the world and have many different eye colours. It is also worth noting that the eye tracker could not be calibrated for participants who wore coloured contact lenses, which could be due to the lens itself, the colour of the lens, or both.

6. The traditionalist

For some of our clients, Arab nationals and Emiratis in particular are a key user group. These participants are likely to be wearing one of the many forms of national dress, which for women could include a hijab (which covers the hair) or a niqab (which covers the head and face, exposing only the eyes).

For obvious reasons, the eye tracker works well for the variations where the face is visible but poses problems where the eyes are covered. The solution is to screen out participants who cover their faces, and/or to ask them beforehand whether they are comfortable removing any coverings that might hinder the study.

7. Murphy’s Law

This law of nature is especially appropriate when tech and user testing are involved: “What can go wrong, will go wrong,” as Murphy puts it. There were times when the eye tracker suddenly stopped recording mid-session. Other times we calibrated the eye tracker and it still stopped recording eye gaze for no apparent reason. Experiencing technical difficulties during a session is never ideal, especially when clients are observing! It made us miss old-fashioned usability testing.

8. Hindering kinesics

There are always times between tasks when participants are not actively interacting with the mobile device or mouse, leaving them with free hands — and idle hands lead to mischief!

If participants made hand gestures, their hands sometimes blocked the eye tracker. We’ve even had a few cases where a participant folded their hands while thinking, which blocked the tracker. Again, this may be because we run eye tracking in conjunction with traditional user testing, where probing from the moderator is a must.

When one or more of the above issues crops up, we end up with insufficient data points and either need to run additional tests or interpret the data we have collected in a different way.

Keeping the above in mind, it is advisable to have a sample size of 30+ participants for eye-tracking studies where heatmaps are used to draw conclusions and interpret results. For qualitative eye tracking, where the gaze recording serves as a reference rather than being aggregated into heatmaps, 6–7 participants are enough.

Here in the Digital of Things lab, we use the latter model for our research studies. However, missing even one person’s gaze leaves us with less reliable data. This being said, eye-tracking is great when scannability, layouts and campaigns need to be tested.

Lessons learnt

From all of our experiences in the Digital of Things lab, we’ve created the small checklist below that helps us recruit and moderate effectively. Keep it on hand when you’re conducting your next user test; it might just save you an extra day of re-tests!

1. Screen participants

This is easily done using a pre-screen survey. Ask candidates whether they match any of the following criteria, and do not recruit those who wear bifocal glasses or coloured contact lenses, have had eye surgery, wear semi-permanent or false eyelashes, or wear conservative traditional dress that covers the face.
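If your pre-screen survey exports to a spreadsheet, the exclusion criteria above can be applied mechanically. The sketch below is a hedged illustration only: the file name, column names and answer values are hypothetical, so adapt them to your own survey tool.

```python
# Hedged sketch: filter a hypothetical pre-screen export against exclusion criteria.
import pandas as pd

# Hypothetical survey columns; True means the candidate matches the exclusion.
EXCLUSION_COLUMNS = [
    "wears_bifocal_or_varifocal_glasses",
    "wears_coloured_contact_lenses",
    "had_eye_surgery",                      # e.g. LASIK or cataract surgery
    "wears_false_or_semi_permanent_lashes",
    "wears_face_covering",                  # unless comfortable removing it for the session
]

candidates = pd.read_csv("prescreen_responses.csv")  # hypothetical survey export

# Keep only candidates who match none of the exclusion criteria.
eligible = candidates[~candidates[EXCLUSION_COLUMNS].any(axis=1)]
eligible.to_csv("eligible_participants.csv", index=False)
print(f"{len(eligible)} of {len(candidates)} candidates are eligible")
```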

2. Prepare participants

  • Send participants a message in advance of the test, reminding them to avoid anything that might pose a problem during the study. This could include avoiding heavy eye makeup, tying their hair back, or bringing makeup remover with them.
  • During the test, the moderator should ask participants to restrict their hand and body movements and to keep looking at the screen while performing their tasks, in order to mitigate user error.

3. Recruit backup participants

To be safe, it is advisable to keep a few participants on standby in case data is lost from more than one participant. These standby participants should know they could be called at the last minute.

4. Avoid unneeded communication with participants during testing

As moderators, we also avoid speaking with participants while they complete a task. If discussion is needed, it should take place only after the task has been completed, so that complete eye movements are captured and unnecessary eye movements are avoided.

5. Analysis

During analysis, it is not advisable to include the moments in which the moderator probed about a specific component on the page, as participants fixate on that component in an unnatural way, which can skew the results.
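One way to do this, assuming you log the probe moments during the session, is to drop the gaze samples that fall inside those intervals before producing heat maps or dwell times. The sketch below is an assumption-laden illustration, not our actual analysis tooling; the column names and interval format are hypothetical.

```python
# Hedged sketch: exclude gaze samples recorded while the moderator was probing.
import pandas as pd

gaze = pd.read_csv("gaze_samples.csv")  # hypothetical export with a timestamp_ms column

# Hypothetical (start, end) probe intervals in milliseconds, logged by the note-taker.
probe_intervals = [(125_000, 143_000), (310_500, 322_000)]

# Mark samples that fall inside any probe interval, then drop them so the
# artificially induced fixations do not skew heat maps or dwell-time figures.
in_probe = pd.Series(False, index=gaze.index)
for start, end in probe_intervals:
    in_probe |= gaze["timestamp_ms"].between(start, end)

clean_gaze = gaze[~in_probe]
clean_gaze.to_csv("gaze_samples_clean.csv", index=False)
```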

If you are a business based in the UAE, give us a shout if you want a demo of the technology we use here in the lab. Better yet, do you have a website or app that could benefit from user testing? Let us know if you’d like us to propose an approach for running any type of user study; we’re happy to help!
