Another strike against the “broken mirror” theory of autism
The “broken mirror” theory of autism, most famously popularized by the neuroscientist V. S. Ramachandran, argues that autistic people have difficulty understanding other people’s intentions at a higher level because, at a lower level, they do not perceive other people’s gestures the way others do.
"Mirror neurons,” first discovered in monkeys, are neurons that fire both when we perform an action ourselves and when we see someone else performing the action. They respond to specific actions, and not just patterns of stimuli—one group of neurons will fire for reaching, another group for grasping, etc.
The idea is that we process other people’s actions by mentally simulating them, by generating the same brain state we would have when performing the action ourselves, thus literally “putting ourselves in someone else’s head.” Since autistic people do not automatically understand other people’s mental states from observing their behavior (they often have to be told what others are feeling), it would seem to follow that something has gone wrong with their mirror neuron system.
Notice that this theory assumes that imagining yourself performing an action gives you all the information you need to take someone else’s perspective. But a 2009 study by Stewart Mostofsky and colleagues at the Kennedy Krieger Institute suggests that not only is this information insufficient, relying on it too heavily may actually be counterproductive.
Dr. Mostofsky had 14 autistic and 13 typically developing 10- to 12-year-olds learn to use a novel device: a robotic arm that they held in one hand and reached with to capture “animals that had escaped from a zoo.” The robot produced a force field that perturbed the children’s arm movements, and the children had to learn to adjust their movements to control the tool and capture the animals. Notice that learning to use the robotic arm required mastering two sets of cues: visual cues (watching the arm move closer to the animal) and proprioceptive cues (the feeling of the force field and the feeling of the body when in the correct reaching position). Proprioception is the body sense, the feeling of the position of one’s muscles and joints. Dr. Mostofsky wanted to know whether autistic children, like typically developing children, could learn from both sets of cues.
Dr. Mostofsky explains what a defender of the “broken mirror hypothesis” of autism would predict:
When we observe another person performing a movement, the internal models to execute the same movement may also be activated in our brain… Indeed, after volunteers observe another person reach while holding a robot that is producing a force field, they perform better than naive volunteers if they are tested on the same field. This is consistent with the hypothesis that observation of an action instantiates the same internal models that are required for production of that action.
In other words, when typically developing people watch someone perform an action (visual input), this activates their internal model of performing the action, which includes what it feels like to perform it (proprioceptive input). But what if autistic people can’t use visual cues to activate their proprioceptively based internal models?
Because this instantiation relies on visual cues, internal models that place a greater than normal reliance on proprioception, while discounting visual consequences, might place the observer at a substantial disadvantage in understanding other people’s actions and imitating their movements. To test our hypothesis, we looked for correlations between how the children represented our simple reaching task and clinical measures of motor, imitation and social function.
Dr. Mostofsky found that typically developing children generalized from both the visual and the proprioceptive input, while autistic children generalized ONLY from the proprioceptive input, and they did so twice as strongly as the typically developing children. In other words, autistic children were not relying on visual information to learn to use the new tool, but they were relying extremely heavily on their body sense.
Furthermore, the more children generalized from proprioceptive cues, the more impairments they had in general motor function, social interaction, and the ability to imitate others’ gestures.*
(This study would have been stronger if the autistic children had also observed people using the tool before attempting to use it themselves. That would have directly demonstrated whether autistic children can learn from visual cues in the absence of proprioceptive ones, the way typically developing children do.)
Be that as it may, these autistic children had a very strong sense of what it felt like to perform these movements, so one would think their ability to mentally simulate these actions would be intact. In other words, their mirror neurons were probably fine. Yet they had difficulty understanding and imitating other people’s gestures. So what was missing? Not mirror neurons, but the ability to learn from visual cues.
Haswell, C. C., Izawa, J., Dowell, L. R., Mostofsky, S. H., & Shadmehr, R. (2009). Representation of internal models of action in the autistic brain. Nature Neuroscience.
*This study clarifies a puzzling earlier Mostofsky study I blogged about, which found that the more participants relied on proprioceptive input, the worse they did on a motor learning task. Most likely, proprioceptive ability does not itself interfere with performing motor tasks (that would be ridiculous). Rather, heavy reliance on proprioception seems to accompany a poor ability to use visual cues, and it is that visual deficit that interferes with performing motor tasks. (Perhaps the unusually strong reliance on proprioception compensates for the inability to learn from visual cues?)