Bertrand Schneider

Studying in-situ Joint Visual Attention

The goal of this work is to describe a methodology for synchronizing two eye-tracking goggles and computing measures of joint visual attention (JVA) in a co-located setting. In our study, dyads of students interacted with different versions of a tangible interface designed for students in logistics.
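The synchronization step can be thought of as resampling both gaze streams onto a common clock once the offset between the two trackers is known. The sketch below is a minimal illustration of that idea using NumPy; the sampling rates, the column layout, and the `offset_s` value are placeholders, not the actual synchronization procedure used in the study.

```python
import numpy as np

def resample_gaze(timestamps, gaze_xy, common_t):
    """Linearly interpolate a gaze stream onto a shared timeline."""
    x = np.interp(common_t, timestamps, gaze_xy[:, 0])
    y = np.interp(common_t, timestamps, gaze_xy[:, 1])
    return np.column_stack([x, y])

# Toy data standing in for two eye-tracker exports (timestamp, x, y).
t_a = np.linspace(0, 600, 18000)            # tracker A, ~30 Hz
t_b = np.linspace(0.35, 600.35, 15000)      # tracker B, ~25 Hz, offset clock
xy_a = np.random.rand(len(t_a), 2)
xy_b = np.random.rand(len(t_b), 2)

offset_s = 0.35                             # hypothetical clock offset of B relative to A
common_t = np.arange(0, 600, 1 / 30)        # shared 30 Hz timeline
gaze_a = resample_gaze(t_a, xy_a, common_t)
gaze_b = resample_gaze(t_b - offset_s, xy_b, common_t)
```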

The video above shows the footage recorded by each mobile eye-tracker, with a ground-truth (top-down) view at the bottom. We use common points known in each perspective (i.e., the two mobile eye-trackers and the top-down view) to compute a homography and remap students’ gazes onto the ground truth. The points used for the homography are the four corners of each fiducial marker detected on top of the small-scale shelves. On the right side, a cross-recurrence graph shows moments of joint visual attention, color-coded by location: red indicates the first warehouse, green the second, and blue the last. Cross-recurrence graphs show time for the first participant on the x-axis, time for the second participant on the y-axis, and a colored pixel whenever the two participants are looking at the same location. Thus, a dark diagonal represents synchronized moments of joint attention, while off-diagonal pixels indicate moments of joint attention with a time lag.
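As a concrete illustration of the remapping step, the sketch below uses OpenCV to estimate a homography from the four corners of a marker as seen in one eye-tracker’s scene camera to the same corners in the top-down view, and then projects a gaze sample into the ground-truth frame. The corner coordinates and gaze point are placeholders; in the study these come from the fiducial markers detected on the small-scale shelves, and corners from several markers could be stacked for a more robust fit.

```python
import numpy as np
import cv2

# Four corners of one fiducial marker as detected in the eye-tracker's
# scene camera (src) and in the top-down ground-truth view (dst).
# Coordinates below are placeholders for the detected marker corners.
src_corners = np.array([[412, 310], [478, 305], [481, 372], [415, 378]], dtype=np.float32)
dst_corners = np.array([[100, 100], [160, 100], [160, 160], [100, 160]], dtype=np.float32)

# Homography from the scene-camera frame to the ground-truth frame.
H, _ = cv2.findHomography(src_corners, dst_corners)

# Remap one gaze sample (in scene-camera pixels) onto the ground truth.
gaze_scene = np.array([[[455.0, 340.0]]], dtype=np.float32)   # shape (1, 1, 2)
gaze_ground = cv2.perspectiveTransform(gaze_scene, H)[0, 0]
print(gaze_ground)   # gaze location in top-down coordinates
```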

Preliminary results are reported in Schneider et al. (accepted): we found that this measure of joint visual attention is a good proxy for the smoothness of the collaboration between two students, and that it is significantly correlated with participants’ performance (and, in some cases, their learning gains).
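For readers who want the flavor of the computation, the sketch below builds a cross-recurrence matrix from two remapped gaze streams: a cell (i, j) is marked when participant A’s gaze at time i and participant B’s gaze at time j fall within a distance threshold of each other, and the proportion of marked cells near the diagonal gives a simple JVA score. The 100-pixel threshold and the ±2-second lag window are illustrative choices, not the values used in the study.

```python
import numpy as np

def cross_recurrence(gaze_a, gaze_b, threshold=100.0):
    """Binary cross-recurrence matrix: True where the two gaze points
    (already remapped to the shared ground-truth frame) are close."""
    diff = gaze_a[:, None, :] - gaze_b[None, :, :]   # pairwise differences
    dist = np.linalg.norm(diff, axis=-1)             # pairwise distances
    return dist < threshold

def jva_score(recurrence, fps=30, max_lag_s=2.0):
    """Proportion of recurrent cells within +/- max_lag_s of the diagonal,
    i.e., near-synchronized joint visual attention."""
    n = min(recurrence.shape)
    lags = np.arange(n)[:, None] - np.arange(n)[None, :]
    band = np.abs(lags) <= int(max_lag_s * fps)
    return recurrence[:n, :n][band].mean()

# Toy gaze streams in ground-truth coordinates (one sample per frame).
rng = np.random.default_rng(0)
gaze_a = rng.uniform(0, 800, size=(300, 2))
gaze_b = gaze_a + rng.normal(0, 60, size=(300, 2))   # B loosely follows A

rec = cross_recurrence(gaze_a, gaze_b)
print(jva_score(rec))
```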

Research team: Bertrand Schneider1, Kshitij Sharma2, Sebastien Cuendet2, Guillaume Zufferey2, Pierre Dillenbourg2, Roy Pea1.

1 Stanford University, 2 Swiss Federal Institute of Technology in Lausanne (EPFL)

Publications

Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (under review). Unpacking the Perceptual Benefits of a Tangible Interface. ACM Transactions on Computer-Human Interaction.

1 Comment

    Super cool! Very impressive how you managed to get the data of two mobile eye-trackers onto one flat ‘ground truth’. From our own attempts we know the many technical challenges one faces there. Looking forward to reading the publication(s).
