Look, no hands! Exploring data with your eyes

Can you imagine using a computer without a mouse, keyboard, or touchscreen? Why would we even need to do that? In this article, we will learn how virtual on-screen ‘lenses’ can be controlled using eye-tracking technology, to magnify and show additional details on charts. Using your eyes to control a computer has some unexpected challenges, but they can be addressed using well-designed interfaces.

The “mantra” of interactive charts

Charts often have to represent so much information that it is not possible to show all of it at once, because there isn’t enough room on the screen. With printed charts, there is not much recourse, other than to print several charts showing different aspects of the data.

With interactive digital visualisations, however, you can explore different aspects of the data by clicking, tapping, selecting, hovering, and typing. Consider a digital map, such as Google Maps. Unlike with a printed map, you can pan around and zoom to show arbitrary detail. You can display or hide certain ‘layers’ of information (e.g., satellite, traffic). You can type to navigate it, and you can annotate it with personalised locations.

For digital data visualisation, common interactions are zooming, filtering, and showing data labels on hover (or by selecting an option). You are given an overview of the data, allowed to zoom and filter, and able to inspect details-on-demand. This trio of interactions has appeared in various forms throughout the history of human-computer interaction (all three were present in Sutherland’s seminal Sketchpad graphical editor from 1963), but it was popularised by University of Maryland Professor Ben Shneiderman. In a highly cited paper, he calls it the “Visual Information Seeking Mantra”, and subsequent research has deferentially dubbed it “Shneiderman’s Mantra”.

Above: a chart with all the labels showing. How cluttered! Instead, we can use details-on-demand. Below: a lens allows us to selectively magnify and display the label for just one data point at a time.

But I don’t want to touch the computer

Shneiderman’s Mantra is easy to implement with taps and clicks, but what if you don’t have a mouse, keyboard, or touchscreen? It seems an odd question, but there are in fact some scenarios where this is the case. With large public display walls, you can’t give every passerby a keyboard or mouse, and a touchscreen would be costly and fragile. In operating theatres, sterility is a concern that motivates “touchless interaction”. Or, the intended user might have differences in their motor skills which prevent them from using conventional input mechanisms.

To solve this, you could use hand or skeletal tracking for “in-air” interaction, such as waving, pointing, and gestures such as pinching or thumbs-up. You could use microphones and speech recognition for voice control. Or you could use eye-tracking, for controlling the interface using your gaze.

My student Abhishek Chander and I decided to build a system that used eye-tracking to interact with charts. Eye-tracking has some benefits over speech and in-air gestures. There’s less to learn: you need only move your eyes, as opposed to using a specific language of spoken commands or motions. Eye-tracking is accessible to those whose speech is not recognised by the system, or those who have difficulty making common hand and body movements. It is a smoother experience for groups of concurrent users: imagine several people at a public display wall, all trying to speak commands at the same time, or doing in-air gestures and smacking each other!

Eye-tracking devices are really cool. They work by shining infrared light into your eyes. The light bounces off various structures in the eye (the cornea, lens, retinal blood vessels) and back into a specially designed optical sensor. The reflection pattern is computationally analysed to reveal the direction in which your eye is pointing. You can also track where someone is looking from a normal video feed of their eyes, without infrared sensors (this was the dissertation topic of my friend and colleague Erroll Wood), but it is not yet as accurate. In either case, calibration is essential: the more gaze data the system collects while you look at known points on the screen, the more accurately it can track you, whether in the infrared or the visible spectrum.
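To give a flavour of what calibration does, here is a minimal sketch (my own illustration, not any particular tracker’s algorithm) that fits a simple polynomial mapping from raw eye features, such as pupil-to-glint vectors, to screen coordinates using least squares. The feature representation and the model are assumptions made for the sake of the example.

```python
# Illustrative sketch only: commercial trackers use more sophisticated, proprietary models.
import numpy as np

def _poly_terms(x, y):
    """Second-degree polynomial terms of a 2D eye feature (e.g. a pupil-to-glint vector)."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_calibration(eye_features, screen_points):
    """Least-squares fit from eye features (N x 2) to known screen positions (N x 2),
    collected while the user fixates a sequence of calibration targets."""
    A = _poly_terms(eye_features[:, 0], eye_features[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs  # shape (6, 2)

def estimate_gaze(eye_feature, coeffs):
    """Map one eye feature vector to an estimated (x, y) screen coordinate."""
    x, y = np.atleast_1d(eye_feature[0]), np.atleast_1d(eye_feature[1])
    return (_poly_terms(x, y) @ coeffs)[0]
```

Each calibration sample pairs an eye feature, measured while the user fixates a known on-screen target, with that target’s position; the more samples, the better the fit, and hence the more accurate the tracking.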

Commercial eye trackers have all sorts of applications. They are used by supermarket chains to study how a shopper’s gaze falls on different areas of shelving, so that they know where to place high-value items. They are used by people with motor impairment, such as that caused by cerebral palsy, to interact with computers. They are used in research to study questions like whether adding colours to computer code can help programmers better understand it.

The curse of the eye-tracker

Eye-tracking suffers from what is known as the “Midas touch” curse. Recall the myth of the avaricious King Midas, who wished that everything he touched would turn to gold. Alas, when this wish was granted, he realised his terrible folly: he could no longer eat or drink, for no sooner did food touch his lips than it turned to gold. Similarly, when you rely on your eyes to “touch” things in an eye-tracking interface, the display changes depending on what you’re looking at, and if this is badly designed, the result can be an erratic and unpleasant experience. In academic terms, this is known as the “gaze multiplexing” problem.

King Midas turns his beloved daughter to gold. Source: Wikimedia Commons

Have you ever tried to click on a menu that appears only when you hover on something, but as you move your cursor towards it, the menu disappears, because you are no longer hovering on the original thing? Isn’t it frustrating? Similarly, a gaze-driven chart could show the label for the data point you’re looking at. To read this label, you would need to direct your gaze momentarily away from the data point. This should not cause the label to disappear; that would be really frustrating. But at the same time, if you want to focus on a different data point that happens to be near the label, we do want the label to go away and the focus to change.

To solve the gaze multiplexing problem, we invented an interface consisting of concentric translucent lenses. It is best understood through the figures accompanying this article. A small inner rectangle shows the area being focused on. This area is magnified and shown within a larger outer rectangle. The larger rectangle can be used simply for magnification, or it can provide additional detail, such as a data label. The rectangles are translucent so that they do not obscure the view of the data underneath them.

Our prototype, showing a 2x magnification lens.
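To make the design a little more concrete, here is a rough sketch of the lens geometry in code. The class name, fields, and proportions are my own illustrative assumptions, not the prototype’s implementation.

```python
from dataclasses import dataclass

@dataclass
class ConcentricLens:
    cx: float          # lens centre (smoothed gaze position), in screen coordinates
    cy: float
    focus_w: float     # size of the inner focus rectangle
    focus_h: float
    magnification: float = 2.0

    def focus_rect(self):
        """Inner rectangle: the region of the chart being focused on."""
        return (self.cx - self.focus_w / 2, self.cy - self.focus_h / 2,
                self.focus_w, self.focus_h)

    def outer_rect(self):
        """Outer rectangle: where the magnified (and possibly annotated)
        copy of the focus region is drawn, concentric with the inner one."""
        w = self.focus_w * self.magnification
        h = self.focus_h * self.magnification
        return (self.cx - w / 2, self.cy - h / 2, w, h)

    def contains_gaze(self, gx, gy):
        """True if a gaze point still falls within the outer rectangle,
        e.g. while the viewer reads a magnified label, so the lens need not move."""
        x, y, w, h = self.outer_rect()
        return x <= gx <= x + w and y <= gy <= y + h
```

The key behavioural point is the last method: as long as your gaze stays anywhere inside the outer rectangle, for instance while reading a magnified label, the lens has no reason to move.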

Besides Midas touch, eye-tracking suffers another problem: eye movements are extremely jittery and erratic. Combined with the small errors in measurement from the eye-tracking equipment, this means that the raw gaze coordinates (i.e., the estimate of where on the screen you’re looking) from the eye tracker cannot be used directly to control a lens. They must be smoothed.

To solve the problem of eye-tracking jitter, Abhishek invented an algorithm he called Dynamic Exponential Smoothing (DES). DES takes each new gaze coordinate and compares it to the previous one to calculate a “smoothed” coordinate, i.e., where the lens should move to. If the new gaze coordinate is very close to the previous one, the “smoothed” lens does not move. This keeps the experience stable for very small eye movements. If the new gaze coordinate is somewhat close (i.e., you shift your gaze to a nearby point), the lens begins to move slowly, and if it is far away (i.e., you look at a totally different part of the chart), the lens accelerates. In practice, moving a lens with DES feels somewhat like pulling a toy car along the floor with a piece of string. Small movements in your arm do not move the car; they are absorbed by the slack in the string. But large movements pull the string taut, and begin moving the car.
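Our paper gives the exact formulation; the sketch below just conveys the general idea under my own assumptions: an exponential smoother whose smoothing factor grows with the distance between the new gaze sample and the current lens position, with a dead zone for tiny movements. The thresholds and the distance-to-smoothing mapping are illustrative, not the values from the paper.

```python
import math

class DynamicExponentialSmoother:
    """Sketch of distance-dependent exponential smoothing for gaze input.
    Thresholds (in pixels) and the alpha schedule are illustrative guesses."""

    def __init__(self, dead_zone=15.0, far_zone=200.0,
                 min_alpha=0.05, max_alpha=0.8):
        self.dead_zone = dead_zone    # movements smaller than this are ignored
        self.far_zone = far_zone      # movements larger than this move at full speed
        self.min_alpha = min_alpha
        self.max_alpha = max_alpha
        self.sx = None                # current smoothed (lens) position
        self.sy = None

    def update(self, gx, gy):
        """Take a raw gaze sample (gx, gy) and return the smoothed lens position."""
        if self.sx is None:
            self.sx, self.sy = gx, gy
            return self.sx, self.sy

        dist = math.hypot(gx - self.sx, gy - self.sy)
        if dist < self.dead_zone:
            # Tiny eye movements (jitter, reading within the lens): don't move at all.
            return self.sx, self.sy

        # Scale the smoothing factor with distance: nearby targets move the
        # lens slowly, distant targets move it quickly (the "taut string").
        t = min(1.0, (dist - self.dead_zone) / (self.far_zone - self.dead_zone))
        alpha = self.min_alpha + t * (self.max_alpha - self.min_alpha)

        self.sx += alpha * (gx - self.sx)
        self.sy += alpha * (gy - self.sy)
        return self.sx, self.sy
```

Each frame, the raw gaze coordinate from the tracker is fed to update(), and the lens is drawn at the position it returns.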

Together, dynamic exponential smoothing and the concentric lens design solve the gaze multiplexing problem. The viewer’s gaze does not need to move a lot to see the magnified view, or read a label, and so the position of the lens stays stable and predictable.

We built three examples of lenses: a magnification lens, a data label lens, and an uncertainty-reducing lens. The third is unusual, and builds on the idea of using charts to control uncertainty, which is explained in more detail in this article. In a nutshell, the premise is that if you have some data that has been computed only approximately (e.g., due to time and resource constraints), and you wish to selectively re-compute some of the data with greater accuracy, you can use a gaze-directed lens to choose which areas of the data to re-compute.
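As a rough sketch of that idea (again, my own illustration rather than the implementation from the paper), the system only needs to know which approximate data points currently fall under the lens’s focus region, and can queue those for exact recomputation:

```python
def points_to_refine(points, lens):
    """Select indices of approximate data points that fall inside the lens's
    focus rectangle, so they can be recomputed at higher accuracy.
    `points` is a list of (x, y) screen positions; `lens` is a ConcentricLens
    as sketched earlier (an assumed helper, not code from the paper)."""
    fx, fy, fw, fh = lens.focus_rect()
    return [i for i, (x, y) in enumerate(points)
            if fx <= x <= fx + fw and fy <= y <= fy + fh]

# Hypothetical usage: recompute only what the viewer is currently looking at.
# for i in points_to_refine(screen_positions, lens):
#     recompute_exactly(i)   # placeholder for the expensive exact computation
```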

Is this fancy eye-tracking interface any good?

We conducted an experiment to see how our gaze-driven system compared to using a mouse. Participants looked at 30 different datasets, and answered questions such as “what was the highest ever price of oil?”, and “what year did the market crash?”. These questions could only be answered using the lens tools, as they either required a label to be read, or a subtle feature to be inspected through magnification. Each participant answered 15 questions using the gaze-driven lenses, and the other 15 by controlling the lens using a mouse.

To our surprise, we found that the gaze-directed lens was as fast as the mouse-driven interface. That is, participants could answer questions as quickly using just their eyes as they could with the mouse. In fact, they were on average 0.1 seconds faster per question with the gaze-directed lens. This is unusual because gaze-directed interfaces are typically slower: gaze is multiplexed (i.e., it has to serve multiple purposes), whereas with a mouse-driven interface the user gets to use their gaze (for reading, scanning, searching) and their hands (for manipulating the interface) simultaneously. Moreover, participants in experimental studies typically have a lifetime of experience with mouse control, and little or no prior experience with eye-tracking, which gives the mouse-driven interface a huge advantage. This was the case in 2015, when our study was conducted, and it is still the case in 2022, at the time of writing, but perhaps in the future gaze-driven interfaces will become more commonplace.

The gaze-directed interface also promoted more efficient exploration of the data. Participants spent more time investigating key regions of charts (such as peaks, troughs, and inflection points) with the gaze-directed interface than they did with the mouse: about 3 seconds more per question. That may not seem like much, but people spent around 30 seconds on each question, so a 3-second difference represents 10% of the per-question time budget, which is quite a lot. They also spent a greater proportion of their time there: with the gaze-directed interface, about 50% of their time went on inspecting the key regions, compared with only 35% with the mouse. This increase in efficiency might have arisen because, when the tracker is correctly calibrated, it is much faster to move the lens just by looking where you want it to go than it is to move it with the mouse. With the gaze-directed interface, participants could therefore spend relatively more time looking at areas of interest, and less time consciously moving the lens around.
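For the curious, a measure like “proportion of time spent in key regions” can be computed along these lines. This is a generic sketch of the idea, not the analysis code from our study:

```python
def proportion_in_regions(gaze_samples, regions):
    """Fraction of gaze samples falling inside any region of interest.
    `gaze_samples` is a list of (x, y) coordinates sampled at a fixed rate;
    `regions` is a list of (x, y, w, h) rectangles around peaks, troughs, etc."""
    def in_region(px, py, region):
        x, y, w, h = region
        return x <= px <= x + w and y <= py <= y + h

    hits = sum(1 for (px, py) in gaze_samples
               if any(in_region(px, py, r) for r in regions))
    return hits / len(gaze_samples) if gaze_samples else 0.0
```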

Conclusion

Some situations, such as with public display walls, make mice and keyboards impractical. Eye-tracking can help, by allowing users to zoom, filter, and get details-on-demand from charts depending on where they are looking. There are challenges such as smoothing the eye-tracking coordinates, and avoiding the “Midas touch” problem, but these can be solved using algorithmic techniques such as dynamic exponential smoothing, and a concentric lens design. Experimental evidence shows that these systems can be as efficient as using a mouse, and can even promote more efficient data analysis.

And now, a summary poem:

Gaze in a bottle, affixed to the stars
Rays infra-reddened, on Jupiter and Mars
Now you see it, now it’s gone

The undetectable detected by closer sight
The untouchable touched by ocular sleight
Bounteous knowledge, leisurely won

Lens, window, portal
Empowers the mortal
on wisdom’s run

Want to learn more? Read our study here (click to download PDF), and see the publication details below:

Chander, Abhishek, and Advait Sarkar. “A gaze-directed lens for touchless analytics.” In Proceedings of the 27th Annual Conference of the Psychology of Programming Interest Group (PPIG 2016) (pp. 232–241). 2016.

Notes and references

… Shneiderman’s Mantra … Shneiderman, Ben. “The eyes have it: A task by data type taxonomy for information visualizations.” In The craft of information visualization, pp. 364-371. Morgan Kaufmann, 2003. Originally published as a technical report in 1996. Available online: https://drum.lib.umd.edu/bitstream/handle/1903/5784/TR_96-66.pdf

… Sketchpad graphical editor … Sutherland, Ivan E. “Sketchpad: a man-machine graphical communication system.” Simulation 2, no. 5 (1964): R-3. Available online: https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf

… public display walls … Perry, Mark, Steve Beckett, Kenton O’Hara, and Sriram Subramanian. “WaveWindow: public, performative gestural interaction.” In ACM international conference on interactive tabletops and surfaces, pp. 109-112. 2010.

… sterility … motivates “touchless interaction” … O’Hara, Kenton, Gerardo Gonzalez, Abigail Sellen, Graeme Penney, Andreas Varnavas, Helena Mentis, Antonio Criminisi et al. “Touchless interaction in surgery.” Communications of the ACM 57, no. 1 (2014): 70-77. https://dl.acm.org/doi/fullHtml/10.1145/2541883.2541899

… dissertation topic of … Erroll Wood … Wood, Erroll William. “Gaze Estimation with Graphics.” PhD diss., University of Cambridge, 2017. Available online: https://www.repository.cam.ac.uk/handle/1810/267905

… adding colours to computer code … Sarkar, Advait. “The impact of syntax colouring on program comprehension.” In PPIG, p. 8. 2015. Available online: https://advait.org/files/sarkar_2015_syntax_colouring.pdf

… using charts to control uncertainty … Sarkar, Advait, Alan F. Blackwell, Mateja Jamnik, and Martin Spott. “Interaction with Uncertainty in Visualisations.” In EuroVis (Short Papers), pp. 133-137. 2015. Available online: https://advait.org/files/sarkar_2015_uncertainty_vis.pdf

Sarkar, Advait, Alan F. Blackwell, Mateja Jamnik, and Martin Spott. “Hunches and Sketches: rapid interactive exploration of large datasets through approximate visualisations.” In The 8th international conference on the theory and application of diagrams, graduate symposium (diagrams 2014), vol. 1. 2014. Available online: https://advait.org/files/sarkar_2014_hunches_sketches.pdf
