Bertrand Schneider

Toward Collaboration Sensing

Making sense of collaborative eye-tracking data

Nowadays massive datasets are becoming available for a wide range of applications, and education is no exception: cheap sensors can now detect every student movement and utterance, and Massive Open Online Courses (MOOCs) collect every click of users taking classes online. This information can provide crucial insights into how learning processes unfold, whether in situ or in a remote situation. However, researchers often lack the tools to make sense of those large datasets; this work proposes additional ways to explore massive log files and describe how collaboration unfolds based on gaze patterns. Eye-tracking data is of particular interest to me because the technology is becoming ever cheaper and more ubiquitous. Several eye-tracking devices are now affordable to the general public, not just to researchers, and there have been multiple interesting attempts at using regular webcams (such as the ones integrated in laptops) to perform basic eye-tracking tasks. Even though the data generated by those low-cost devices is still far from perfect, the trend is clear: their price is steadily decreasing and their accuracy improving. In the long run, it is likely that every device on the market will be equipped with some kind of eye-tracking technology.

The dataset

I previously conducted an experiment in which dyads of students (N=42) remotely worked on a set of contrasting cases. The students worked in pairs, each in a different room, both looking at the same diagram on their computer screen, and they were able to communicate through an audio channel over the network. Their goal was to use the displayed diagram to learn how the human brain processes visual information. Two Tobii X1 eye-trackers running at 30 Hz captured their gaze during the study. In the "gaze" condition, members of the dyads saw the gaze of their partner on the screen, shown as a light blue dot, and had the opportunity to disable this overlay with a keystroke (interestingly, none of the students chose to deactivate the gaze awareness tool); in the control "no-gaze" condition, they did not see the gaze of their partner. Dyads collaboratively worked on this task for 12 minutes; they then read a textbook chapter for another 12 minutes, which provided them with explanations and diagrams about visual processing in the human brain. The structure of the activity followed a PFL (Preparation for Future Learning) type of learning task (i.e., contrasting cases followed by standard instruction). Students finally took a post-test and received a debriefing about the study goal. I found that this intervention (being able to see the gaze of their partner in real time on the screen with the gaze awareness tool) helped students achieve a significantly higher quality of collaboration and a significantly higher learning gain compared to the control group. The two eye-trackers also stored students' eye movements as logs; because of technical issues, complete eye-tracking data is available for only 16 pairs (N=32).
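Since the raw logs are essentially timestamped gaze coordinates sampled at 30 Hz, pairing the two partners' streams is the first preprocessing step. Below is a minimal sketch of that step, under stated assumptions: the file layout and column names (t_ms, x, y) are placeholders for illustration, not the actual Tobii log format.

import pandas as pd

def load_dyad(subject_a_csv, subject_b_csv):
    """Load two 30 Hz gaze logs and align them on their timestamps."""
    a = pd.read_csv(subject_a_csv).sort_values("t_ms")  # columns assumed: t_ms, x, y
    b = pd.read_csv(subject_b_csv).sort_values("t_ms")
    # At 30 Hz, samples are ~33 ms apart; a 17 ms tolerance matches each
    # sample of subject A with at most one nearby sample of subject B.
    merged = pd.merge_asof(a, b, on="t_ms", suffixes=("_a", "_b"),
                           tolerance=17, direction="nearest")
    return merged.dropna()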

[Figure] The contrasting cases that students had to analyze.

Goals

This work has several goals. The first is to provide an alternative approach for exploring eye-tracking data using data visualization techniques. I conjecture that visualization techniques for representing massive datasets can provide interesting insights to researchers. Previous work has sought to develop visualizations for representing dyads' moments of joint attention (cf. the cross-recurrence graph below); I want to propose an alternative and perhaps more intuitive way of visualizing this particular kind of data, e.g., by building networks that represent students' shared visual attention. The second goal is to compute network measures based on those graphs, so as to examine whether some metrics differ significantly across the two experimental groups; those metrics can provide interesting proxies for estimating dyads' quality of collaboration. The third goal is to automatically predict students' quality of collaboration by feeding network features into machine learning algorithms.

[Figure] Cross-recurrence graphs are the standard way of visualizing dual eye-tracking data. The x-axis shows time for subject 1, while the y-axis shows time for subject 2. Dark points on the diagonal represent moments of joint visual attention (JVA): group 1, on the left, exhibits low levels of JVA, while group 2, on the right, is highly synchronized.
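To make the figure above concrete, here is a hedged sketch of how such a plot can be computed from two time-aligned gaze streams. The 100-pixel proximity radius is an assumption; any threshold that approximates "looking at the same spot" would do.

import numpy as np
import matplotlib.pyplot as plt

def cross_recurrence(gaze_a, gaze_b, radius=100.0):
    """gaze_a / gaze_b: (T, 2) arrays of x/y screen coordinates.
    Cell (i, j) is 1 when subject 1's gaze at time i and subject 2's
    gaze at time j fall within `radius` pixels of each other."""
    diffs = gaze_a[:, None, :] - gaze_b[None, :, :]  # shape (T_a, T_b, 2)
    dists = np.linalg.norm(diffs, axis=-1)
    return (dists < radius).astype(float)

# Transposed so that subject 1 runs along the x-axis, as in the figure:
# plt.imshow(cross_recurrence(gaze_a, gaze_b).T, cmap="gray_r", origin="lower")
# plt.xlabel("time, subject 1"); plt.ylabel("time, subject 2"); plt.show()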

Using fixations as nodes and saccades as edges in a network

To construct graphs from gaze data, I divided the screen into 44 different areas based on the configuration of the diagrams learners were shown during the study. Students had to analyze five contrasting cases; the answers to the top left and top right cases were given, and possible answers were listed on the right. Students had to predict the answers for the three remaining cases. I thus segmented the screen into squares, which provided 30 areas covering the diagrams of the human brain and 8 areas covering the answer keys. In this approach, edges are created between nodes when we observe eye movements between the corresponding areas of interest, and the weight of an edge is proportional to the number of visual transitions between the corresponding screen areas. A first (unsuccessful) attempt used the individual as the unit of analysis: those networks were too dense and too highly connected to be useful. The next attempt involved building one graph for each dyad. Here, I wanted to capture the moments in which dyad members were jointly looking at the same area of the screen. The nodes correspond to the screen areas, and edges are defined as previously (i.e., the number of saccades between two areas of the screen made by either individual).
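The sketch below illustrates this construction. A uniform grid is used here for simplicity, and the grid dimensions, screen resolution, and attribute names are assumptions; the study's actual segmentation followed the configuration of the diagrams.

import networkx as nx

def area_of(x, y, cols=8, rows=6, width=1280, height=1024):
    """Map a screen coordinate to a grid cell id (grid size is an assumption)."""
    col = min(int(x / width * cols), cols - 1)
    row = min(int(y / height * rows), rows - 1)
    return row * cols + col

def dyad_graph(samples_a, samples_b):
    """samples_a / samples_b: time-aligned sequences of (x, y) gaze points."""
    g = nx.DiGraph()
    # Edges: one directed, weighted edge per transition an individual
    # makes between two screen areas.
    for samples in (samples_a, samples_b):
        areas = [area_of(x, y) for x, y in samples]
        for src, dst in zip(areas, areas[1:]):
            if src != dst:
                w = g.get_edge_data(src, dst, default={}).get("weight", 0)
                g.add_edge(src, dst, weight=w + 1)
    # Nodes: weight grows with the number of time-aligned samples where
    # both partners looked at the same area (joint visual attention).
    for (xa, ya), (xb, yb) in zip(samples_a, samples_b):
        if area_of(xa, ya) == area_of(xb, yb):
            n = area_of(xa, ya)
            g.add_node(n)
            g.nodes[n]["joint"] = g.nodes[n].get("joint", 0) + 1
    return g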

[Figure] Networks built with eye-tracking data. The graph on the left shows a group with a low quality of collaboration; the graph on the right shows a group with a high quality of collaboration.

From a data visualization perspective, this approach conveys key patterns in collaborative learning situations. The graph on the left above shows a dyad in the "no-gaze" condition; one can immediately see that these students rarely shared a common attentional focus: nodes are small and poorly connected. The graph on the right represents a dyad in the "visible-gaze" condition and stands in strong contrast to the previous example: here students looked at common items much more frequently, and those moments of joint attention provided opportunities to compare diagrams. Nodes are bigger and better connected.
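For readers who want to reproduce this kind of picture, a minimal plotting sketch follows: node sizes are scaled by the joint-attention counts and edge widths by transition counts from the graph built above. The scaling constants are arbitrary choices.

import networkx as nx
import matplotlib.pyplot as plt

def draw_dyad_graph(g):
    """Draw a dyad graph: node size ~ joint attention, edge width ~ transitions."""
    pos = nx.spring_layout(g, seed=42)  # force-directed layout
    sizes = [50 + 20 * g.nodes[n].get("joint", 0) for n in g]
    widths = [0.5 + 0.3 * d["weight"] for _, _, d in g.edges(data=True)]
    nx.draw_networkx(g, pos, node_size=sizes, width=widths,
                     with_labels=False, arrows=False)
    plt.show()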

[Figure] All the networks generated from the current dataset.

Based on this new dataset, we computed basic network metrics. The variables below satisfied the parametric assumptions of the analysis of variance we used (i.e., homogeneity of variance and normality). We found that in the visible-gaze condition, there were significantly more nodes (F(1,30)=8.57, p=0.006) with a bigger average size (F(1,30)=22.15, p<0.001), more edges (F(1,30)=5.63, p=0.024), and more reciprocated edges (F(1,30)=7.31, p=0.011). Those results indicate that we can potentially separate our two experimental conditions solely on the basis of network characteristics. Furthermore, several measures were significantly correlated with the groups' quality of collaboration (see the rating scheme by Meier, Spada, & Rummel, 2007): the average size of a node was correlated with the overall quality of collaboration (r(32)=0.62, p=0.039), and the sub-dimensions of the collaboration quality rating scheme were correlated with various graph metrics (for more details, see Schneider & Pea, 2014). Finally, we fed those metrics into a machine learning algorithm and found encouraging results when predicting students' quality of collaboration (again, see the paper referenced below for details).
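The sketch below shows how such an analysis can be assembled. The four features mirror the metrics named above, while the one-way ANOVA call and the choice of classifier (logistic regression with cross-validation) are illustrative assumptions, not necessarily the exact pipeline from the paper; gaze_graphs, no_gaze_graphs, all_graphs, and quality_labels are hypothetical variables.

import networkx as nx
import numpy as np
from scipy.stats import f_oneway
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def graph_features(g):
    """The four metrics discussed above, extracted from one dyad graph."""
    n_nodes = g.number_of_nodes()
    avg_size = np.mean([d.get("joint", 0) for _, d in g.nodes(data=True)])
    n_edges = g.number_of_edges()
    n_reciprocated = sum(1 for u, v in g.edges if g.has_edge(v, u))
    return [n_nodes, avg_size, n_edges, n_reciprocated]

# Comparing the two conditions on one feature (here: number of nodes):
# f_stat, p = f_oneway([graph_features(g)[0] for g in gaze_graphs],
#                      [graph_features(g)[0] for g in no_gaze_graphs])

# Predicting a (binarized) collaboration-quality label from the features:
# X = np.array([graph_features(g) for g in all_graphs])
# y = np.array(quality_labels)  # e.g., a median split on the rating scheme
# scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)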

Conclusion

Those preliminary results show the relevance of applying network analysis techniques to eye-tracking data. In particular, I found this approach fruitful when applied to social eye-tracking data (i.e., a collaborative task where the gaze behaviors of each member of a dyad are recorded simultaneously and made visible to the other member). In summary, this work provides three significant contributions. First, I developed new visualizations to explore social eye-tracking data; researchers can take advantage of this approach to discover new patterns in existing datasets. Second, simple network metrics might serve as acceptable proxies for evaluating the quality of group collaboration. Third, I fed network measures into a machine learning algorithm, and the results suggest that those features can predict multiple dimensions of a productive collaboration. As eye-trackers become cheaper and more widely available, one can develop automatic measures for assessing the dynamics of people's collaborations. Such instrumentation would enable researchers to spend less time coding videos and more time designing studies and exploring patterns in their data, thus providing augmentation tools that let humans and computers each play to their strengths in human-machine systems for studying collaboration. In formal learning environments, such measures could be computed in real time; teachers could employ such 'collaboration sensing' metrics to target specific interventions while students are at work on a task. In informal networked learning, collaboration-sensing metrics could trigger hints or provide other scaffolds for guiding collaborators toward a more productive coordination of their attention and action.

This work won the Best Paper Award at the LAK '13 (Learning Analytics and Knowledge) conference held in Leuven, Belgium, in April 2013.

References

Meier, A., Spada, H., & Rummel, N. (2007). A Rating Scheme for Assessing the Quality of Computer-Supported Collaboration Processes. International Journal of Computer-Supported Collaborative Learning, 2(1), 63-86.

Schneider, B., Abu-El-Haija, S., Reesman, J., & Pea, R. (2013). Toward Collaboration Sensing: Applying Network Analysis Techniques to Collaborative Eye-tracking Data. Proceedings of the Third International Conference on Learning Analytics and Knowledge (LAK '13) (pp. 107-111). Leuven, Belgium: ACM.

Schneider, B., & Pea, R. (2014). Toward Collaboration Sensing. International Journal of Computer-Supported Collaborative Learning, 9(4), 371-395.
