Bertrand Schneider


An augmented small scale brain to prepare students for future learning in neuroscience.

The goal of BrainExplorer is to make neuroscience concepts accessible to a broader audience. We argue that the spatial nature of the brain makes it an ideal candidate for hands-on activities coupled with a tangible interface. Also, we wanted to use this learning environment as a way to prepare students for future learning. Our system allows users to discover the way neural pathways work by interacting with an augmented model of the brain. By severing connections, users can observe how the visual field is impaired and thus actively learn from their actions.  The following figure shows the technical setup of BrainExplorer:

Given that the brain is a dynamic 3D system, we argue that it is extremely difficult to teach neuroscience with standard tools. Our system, BrainExplorer, takes advantage of recent technological developments, including infrared camera technology and webcam-based tracking, to create a tangible user interface for studying the human brain.
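The core interaction can be reduced to a small model: the visual pathway as a set of links that can be severed, with the surviving parts of the visual field computed from what still reaches the cortex. The sketch below (in Python rather than our Processing code, with illustrative names and deliberately simplified anatomy) captures the idea:

```python
# Simplified model of the visual pathway as severable links, in the spirit
# of BrainExplorer. Each hemifield's signal travels eye -> (chiasm) -> tract;
# fibers from the nasal retina cross at the chiasm, so each eye's left
# hemifield ends up in the right optic tract and vice versa.
PATHS = {
    ("left eye", "right hemifield"):  ["left optic nerve", "left optic tract"],
    ("left eye", "left hemifield"):   ["left optic nerve", "chiasm crossing fibers", "right optic tract"],
    ("right eye", "left hemifield"):  ["right optic nerve", "right optic tract"],
    ("right eye", "right hemifield"): ["right optic nerve", "chiasm crossing fibers", "left optic tract"],
}

def perceived(severed):
    """Return the (eye, hemifield) signals that still reach the visual cortex."""
    return {key for key, links in PATHS.items()
            if not any(link in severed for link in links)}

# Cutting the right optic tract knocks out the left hemifield of both eyes
# (a homonymous hemianopia); cutting an optic nerve blinds one whole eye.
remaining = perceived({"right optic tract"})
```

Severing `"chiasm crossing fibers"` in this model yields the classic bitemporal hemianopia, which is exactly the kind of lesion-to-deficit mapping the table lets users discover by hand.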

Our learning goal is for students to learn about the structures involved in processing visual stimuli, their spatial locations in the three dimensional brain, how information is processed in the visual system, and what effects specifically localized lesions might have on a person’s visual field. Our system was targeted to a wide range of educational settings. Its primary purpose is as an inquiry learning tool for middle and high school students to engage in scaffolded investigation about the brain.

To test the relevance of our system for neuroscience education, we conducted an experiment with 28 participants (13 males, 15 females; average age = 28.2, SD = 5.7). Half of the participants interacted with the system for 15 minutes, took a learning test, read a textbook chapter for 15 minutes, and finally took a post-test. The other half of the participants followed the same procedure except that they read the textbook chapter first and then used BrainExplorer. The design of the experiment is summarized below:

Our results suggest that participants not only learn better with BrainExplorer, but also benefit more from the table if they use it before reading a text on the same topic. These results have several implications for the design of educational tangibles and learning activities related to neuroscience. The results are shown below:

First, our findings suggest that BrainExplorer better supports knowledge building than a paper version of the same learning material. Future studies should isolate which component of BrainExplorer explains most of the variance of this outcome: did our system outperform the paper activity because users were able to explore the domain at their own pace, or because the 3D physical representation is more appropriate for learning concepts related to the brain?

Second, we found that properly sequencing learning activities when using a tangible interface is crucial for knowledge building. Participants in both conditions went through identical material; the only difference was that they completed the two activities in reverse order. Participants who used BrainExplorer first and then read a text significantly outperformed the group who read the text first. This result means that learning activities do not have an additive effect: they are not interchangeable. On the contrary, learning activities interact in complex ways. Our results suggest that anchoring new learning in sensorimotor activities provides a better foundation for future learning.



Prof. Paulo Blikstein and his teaching team, for providing constant support during the development of this project. In particular, this project would not have been possible without the help of Claire Rosenbaum. Jenelle Wallace also participated in the conceptual and physical design of this learning environment.


Project page on the TLTL (Transformative Learning Technologies Lab) webpage.

Final Project Document for the BBA (Beyond Bits and Atoms) class.


Schneider, B., Wallace, J., Pea, R., & Blikstein, P. (submitted). Sequencing Hands-on Activities on a Tangible User Interface: the Case of Neuroscience. IEEE Transactions on Learning Technologies.

Schneider, B., Wallace, J., Pea, R., & Blikstein, P. (2012). BrainExplorer: An Innovative Tool for Teaching Neuroscience. ACM International Conference on Interactive Tabletops and Surfaces, ITS  ’12 (pp. 407-410). Boston, MA, USA: ACM.
Below you can find the evolution of the project (this table was built in two weeks as the final project for the “Beyond Bits and Atoms” class taught by Paulo Blikstein at Stanford).

Evolution of the project


Day 1 (Saturday): learning a lot of Processing (basic stuff)


Day 2 (Sunday): still learning a lot of Processing (libraries: TUIO and a physics library)


Day 3 (Monday): first trial with tags and TUIO. Software side: basic springs between the tags with the physics library. Brainstorming on how to mount the brain on the supports.
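The physics library handled the dynamics for us, but the rule behind those springs between tags is just Hooke's law. A minimal sketch (illustrative Python, not our actual Processing code):

```python
import math

# Hooke spring between two tag positions: the force on the tag at p1 pulls
# it toward its rest distance from p2. Constants are illustrative.
def spring_force(p1, p2, rest_length=100.0, k=0.05):
    """Return the (fx, fy) force on the tag at p1."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)               # coincident tags: no defined direction
    stretch = dist - rest_length        # positive when stretched, negative when compressed
    return (k * stretch * dx / dist, k * stretch * dy / dist)
```

A physics library adds damping and integration on top of this, which is why we used one rather than rolling our own.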


Day 4 (Tuesday): Building the physical support (table with a semi-transparent surface, camera, projector,…)


Day 5 (Wednesday): trying to make the tracking of the tags more reliable (e.g. with IR markers). No success (Bertrand spent quite some time testing Collin’s multitouch table to improve the way fiducials were tracked). The solution: bigger tags on vinyl stickers. Deciding on the final setup: a table built with acrylic sheets, the projector from the SLATE system, and a camera to be bought (the Logitech one is not very good).
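Our fix ended up being physical (bigger tags), but a common software-side complement for jittery fiducial positions is to exponentially smooth each tag's reported coordinates between frames. A sketch, not what we shipped:

```python
# Exponential smoothing of a tracked position: blend each new camera
# reading into the previous estimate. alpha near 0 = very smooth but
# laggy; alpha near 1 = responsive but jittery.
def smooth(prev, new, alpha=0.3):
    """Return the updated position estimate as a tuple."""
    return tuple(a + alpha * (b - a) for a, b in zip(prev, new))
```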

The evolution of the tags (from left to right): paper, paper on acrylic, mirror acrylic, vinyl sticker on acrylic

Final iteration: use the laser cutter to engrave the acrylic on one side to make the surface look frosted and minimize reflection through the tags


Day 6 (Thursday)

Jenelle is working on the physical part (putting magnets in the brain, building the supports for the tags, making all of the fiducials, and so on); Bertrand is working on the software (visualizing potentials travelling across the brain). Brainstorming on the next steps of the project.

Jenelle putting magnets on the brain and building the supports

Bertrand programming links between the tags


Day 7 (Friday):

– (Bertrand) trying to calibrate the webcam and the projector; more difficult than planned. Fixing bugs in the software.

– (Jenelle) working on the supports for each brain part.
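The webcam–projector calibration mentioned above boils down to mapping camera pixel coordinates into projector coordinates. The simplest version is an affine fit from three known point pairs; a real setup may need a full homography (four pairs) to absorb perspective. A sketch with illustrative values:

```python
# Affine calibration from three camera/projector point correspondences.
def solve3(m, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    a = [row[:] + [rhs] for row, rhs in zip(m, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(3):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    return [a[i][3] / a[i][i] for i in range(3)]

def affine_from_points(cam_pts, proj_pts):
    """Return f(camera_xy) -> projector_xy fitted to three correspondences."""
    rows = [[x, y, 1.0] for x, y in cam_pts]
    ax = solve3(rows, [p[0] for p in proj_pts])  # x' = ax[0]*x + ax[1]*y + ax[2]
    ay = solve3(rows, [p[1] for p in proj_pts])  # y' = ay[0]*x + ay[1]*y + ay[2]
    return lambda pt: (ax[0] * pt[0] + ax[1] * pt[1] + ax[2],
                       ay[0] * pt[0] + ay[1] * pt[1] + ay[2])
```

In practice you collect the correspondences by projecting a few crosshairs and clicking where the camera sees them, which is the tedious part that made this "more difficult than planned."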

Also brainstorming on a conceptual level: a better representation of the axons should look like this:

Planning on adding myelin sheaths and Schwann cells to the axons

Next iteration on how to visualize an axon:

two brain parts, with the axon of a neuron stretched between them


Day 8 (Saturday):

Jenelle worked on the supports, and is almost done with them.
Bertrand fixed a few bugs with the way axons are visualized (thanks Shima!) and built a table for the system.


Day 9 (Sunday)

After several days of gluing my fingers together with several different types of glue (superglue, epoxy, Gorilla Glue, acrylic glue, and more superglue), I (Jenelle) finished the supports!

[Note: Acetone is great for dissolving almost all of the glue types listed above.]

Several useful tips about gluing that might seem self-evident but require more planning than I thought to deal with the uneven surface of our brain model: 

– If only using one support, try to put it at the balance point.

– Maximize the surface area of contact between the supports and the pieces.

– Use epoxy when space-filling glue is necessary – if the pieces don’t fit exactly to the shape of the supports.

– Top off with superglue around the edges of the epoxy – it seems to be stronger.


Day 10 (Monday)

The brain is now forming a network!


And we had our first user 🙂

Several users suggested that it was a bit distracting to move back and forth between the physical interface and the computer display when cutting the connections. Based on this feedback, we decided it would be nice to let the user interact with the physical system using an IR pen whose input could be picked up by a Wii remote. Bertrand therefore began working on connecting the Wii remote to the computer while Jenelle made an IR pen based on this design:
Day 11 (Tuesday)
Bertrand got the final setup for the projector working–we decided that it would be best to have the projector underneath the table rather than mounted above as we had originally planned. However, we realized that this led to another problem with our system: we were planning to project images on the surface of the brain, and we obviously couldn’t do this from underneath. We had already figured out that we couldn’t project directly onto the brain surface since it was too uneven to focus, but we had an idea of giving the user a “magnifying glass” that he or she could hold above the surface of the brain, and then images could be projected onto it. Unfortunately, with our new projector set-up, this idea was no longer feasible.
After some discussion, we came up with the idea of displaying the image underneath the selected brain piece. Jenelle noted that one thing she has had difficulty with in her study of neuroscience is relating the brain sections that researchers use to stain and study different regions with the 3D structure of the whole brain. Therefore, she came up with the idea of having a functionality where the user could scroll through brain sections from top to bottom and locate the region of interest.
We took images from an online brain atlas:
brain atlas screenshot
It took a bit of work to coordinate the images from the atlas with our physical model. Looking at both horizontal sections and coronal sections helped.
For the first prototype, I decided to just make horizontal sections to correspond with each piece. Of course, the final version would have horizontal, coronal, and sagittal sections. The work involved importing the images into Photoshop, removing the backgrounds (so the images would look nice on the screen), adjusting the colors, and pinpointing the regions involved in the visual pathway. Here’s one example image:


Day 11 (Tuesday), continued


Bertrand spent the day working with the Wii remote and improving the visual representation of the visual pathways of the brain. The user can now cut a connection with an IR pen:
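Detecting a "cut" amounts to a proximity test between the pen tip and the line segment drawn between two tags. A sketch of that geometry (the threshold and names are illustrative, not our exact code):

```python
import math

def dist_to_segment(p, a, b):
    """Distance from point p to the segment from a to b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)      # degenerate segment
    # Project p onto the segment and clamp to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def pen_cuts(pen, conn_a, conn_b, threshold=10.0):
    """True when the pen tip passes within `threshold` pixels of a connection."""
    return dist_to_segment(pen, conn_a, conn_b) < threshold
```

Run against every drawn connection per frame, whichever one the pen grazes gets severed.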


Day 12 (Wednesday)

Building the user interface: there are now three buttons at the top of the screen, where you can select different modes:

  • Visual pathway, which displays simplified connections to highlight how information travels from the eyes to the visual cortex
  • Network, which is the default mode
  • Structure, which displays horizontal slices of the brain

Day 13 (Thursday)

We worked on the final calibration of the system in the atrium where the expo would be. We had the amazing realization that sunlight contains IR light (duh)! After lots of different solutions (we tried putting posterboard and fabric around the edges of our table), we decided that a black fabric shield was the only fix.

Below is a screenshot of the “structure” mode, where the user can go through the different slices of a specific part of the brain by moving the IR pen on the image.
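The scrolling itself can be as simple as mapping the pen's vertical position over the image to a slice number. A sketch with illustrative coordinate ranges:

```python
# Map the IR pen's y position over the image to a slice index,
# top of the image = slice 0, bottom = the deepest slice.
def slice_index(pen_y, top, bottom, n_slices):
    """Return a slice number in [0, n_slices - 1] for a pen at pen_y."""
    frac = (pen_y - top) / (bottom - top)
    frac = max(0.0, min(1.0, frac))          # clamp positions off the image
    return min(int(frac * n_slices), n_slices - 1)
```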

We also loaded the brain slice images into the program, and struggled a bit with resizing them and getting the orientation correct (the projector flips the images horizontally, so the text was all backwards).
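The flip itself is trivial once the image is a pixel grid; the fiddly part was remembering to apply it everywhere. For the record, a one-line sketch:

```python
# Mirror an image horizontally by reversing each row of the pixel grid,
# undoing the projector's left-right flip.
def flip_horizontal(image):
    """Return a new 2D pixel array with every row reversed."""
    return [row[::-1] for row in image]
```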


Day 14 (Friday) – The Presentation


Above is Bertrand demonstrating the system to a visitor. Notice that there is a webcam between the two eyes: what the brain perceives is displayed in the bottom left corner of the table, so the user can directly see how cutting different connections affects what the brain sees.


    The idea of a concept like BrainExplorer, and the results it produces, is exceptional. I have two points to make. First, this could be a new opening in the world of learning. Mankind has always tried to develop and evolve itself, and what a tangible user interface could do for a serious field of study like neuroscience is so brilliant that the impact on kids going to school would be amazing. It would reach them better than ever before, thereby changing the whole way we see education and learning. People would learn faster, and this would pave the way for new research and improvement in science. The second important factor I observe is that this could change the whole way we perceive technology and interfaces: an interface that is accurate and digital but that can also be felt by hand is simply brilliant. There have been many problems with the analog systems in use and the precision of their error detection, and many firms do not dare to use digital systems because the interface is too complicated or not user friendly. Adapting those systems to a TUI would improve error correction, since it is digital, while also giving the user the native feel. It is always important that the user gets feedback; adding a TUI to a complex system makes the system look simple while remaining complex underneath. Imagine the effects a TUI system could have for space exploration, for the army, or at high-end research organisations: de-cluttering the devices while also making them more precise. The whole disadvantage of using an analog system could be removed. Great work guys.

  • Wow Bertrand

    AMAZING write up and skills you have. Thanks for not only showing us the END result, but how it was done!

