In this project we discuss the design and evaluation of a gaze-based interaction technique for visualizing a phylogenetic tree. In our implementation, users could brush, expand, collapse, and rename nodes of the tree by looking at the species of interest and using the keyboard. Our goal was to investigate the effectiveness of gaze-based interactions for a specific task, namely the exploration of a tree structure.
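The gaze-plus-keyboard interaction described above can be sketched roughly as follows. This is a minimal illustration, not the actual study code: the class and function names, the key bindings, and the tolerance radius are all hypothetical.

```python
import math

class TreeNode:
    """Illustrative tree node with a screen position and interaction state."""
    def __init__(self, name, x, y):
        self.name = name
        self.x, self.y = x, y   # screen position of the node
        self.collapsed = False
        self.brushed = False

def nearest_node(nodes, gaze_x, gaze_y, radius=60):
    """Return the node closest to the gaze point, within a tolerance radius
    that absorbs eye-tracker inaccuracy; None if no node is close enough."""
    best, best_d = None, radius
    for n in nodes:
        d = math.hypot(n.x - gaze_x, n.y - gaze_y)
        if d < best_d:
            best, best_d = n, d
    return best

def handle_key(node, key, new_name=None):
    """Apply a keyboard command to the node currently looked at
    (hypothetical key bindings: b = brush, c = collapse, r = rename)."""
    if node is None:
        return
    if key == 'b':
        node.brushed = not node.brushed      # brush / unbrush
    elif key == 'c':
        node.collapsed = not node.collapsed  # collapse / expand subtree
    elif key == 'r' and new_name:
        node.name = new_name                 # rename
```

The keyboard carries the commands while gaze only selects the target, which sidesteps the "Midas touch" problem of triggering actions by looking alone.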
We tried different visualizations and discovered that natural and efficient gaze-based interactions are difficult to implement. Our original idea was simple: to use gaze to facilitate users' perception of a graph. We initially implemented three prototypes enhanced with gaze-based techniques: a scatter plot, a bar graph, and a choropleth map. We wanted detailed information to be revealed when users focused on a certain region, while content in the periphery stayed hidden. We soon realized that this approach was not ideal, as showing detailed information in a single region prevents users from comparing or analyzing content in other parts of the visualization. Moreover, we observed that having the visualization change radically based on gaze location is highly distracting. We concluded that users would prefer to have everything displayed, because they can then freely choose what to look at without worrying about where their gaze lands.
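The gaze-contingent reveal we experimented with (and later abandoned) can be sketched as follows. This is an illustrative minimal version, not the prototype code; the radii, falloff, and threshold values are hypothetical.

```python
import math

def detail_level(item_x, item_y, gaze_x, gaze_y,
                 focus_radius=80, falloff=200):
    """Return a detail weight in [0, 1]: full detail at the gaze point,
    fading linearly to zero in the periphery."""
    d = math.hypot(item_x - gaze_x, item_y - gaze_y)
    if d <= focus_radius:
        return 1.0
    if d >= focus_radius + falloff:
        return 0.0
    return 1.0 - (d - focus_radius) / falloff

def visible_labels(items, gaze_x, gaze_y, threshold=0.5):
    """Labels readable for the current gaze position. Each item is a
    (name, x, y) tuple. This illustrates the drawback we observed:
    only one region is readable at a time."""
    return [name for name, x, y in items
            if detail_level(x, y, gaze_x, gaze_y) >= threshold]
```

Because `visible_labels` returns a different set on every fixation, the display keeps changing under the user's eyes, which is exactly the distraction that led us to drop this design.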
We compared this approach with a purely mouse-based interface. We found the following results:
Users were significantly slower at the “brushing”, “navigating”, and “analyzing” tasks when using their gaze, but equally fast at the “renaming” task whether using the mouse or their gaze. Several factors explain this pattern of results: the inaccuracy and latency of the eye-tracking technology we used, as well as a certain discomfort users feel when controlling things with their eyes.
Even though eye-tracking techniques can potentially compete with traditional input devices, users still consider gaze interactions more demanding. It is not clear whether this perception would diminish over time with appropriate training. We believe that eye-trackers should be used in conjunction with other interaction techniques, and only for simple tasks (such as switching between windows or selecting a text field). Trying to perform complicated tasks with the gaze is similar to controlling the rhythm of one's breath: it is a conscious and effortful act. In conclusion, designing efficient gaze-based interaction is complex and difficult; researchers have not yet found a situation where eye-tracking techniques offer a real advantage over traditional input devices.
As future work, we are interested in developing less disruptive interaction techniques for eye-trackers; more specifically, actions that do not require any visual change on the screen.