Neuroscience’s evil twin: The Neuromyth

“Effective teaching might be the hardest job there is”

– William Glasser

With the advent of neuroeducation, an offspring of neuroscience and psychology that informs educational policy, educators are being bombarded with new findings, all promising magical results and startling discoveries. It is glorious to think of scientists in lab coats using brain activations to tell laypeople what happens when they learn or remember, but herein lies the very danger. Much like the game of Chinese Whispers, the message resembles the truth less and less with each retelling. And just like that, neuroeducation switches to its ugly alter ego: the neuromyth.

A neuromyth is “a misconception generated by a misunderstanding, a misreading, or a misquoting of facts scientifically established (by brain research) to make a case for use of brain research in education and other contexts”. Neuromyths are becoming a hindrance to education systems worldwide, and the ways in which they arise are numerous (Pasquinelli, 2012).

Scientific facts, when distorted, turn into neuromyths. For example, a popular myth states that children learn better when they are taught in their preferred learning style (visual, auditory or kinaesthetic); this claim rests on the finding that these modalities are processed in different parts of the brain. It ignores, however, the fact that these regions are highly interconnected, and that children do not actually process information better when they rely on only one modality. Scientific facts can thus be oversimplified and then misinterpreted.

Neuromyths can also arise from scientific findings that were later disproven. A prime example is the Mozart Effect: the claim that listening to classical music boosts one’s IQ. This was quickly debunked, as subsequent studies failed to replicate it.

Finally, and most commonly, neuromyths can result from the misinterpretation of scientific results. The idea of ‘critical periods’ of learning (that certain types of learning occur only during certain times in life, especially childhood) exemplifies this. It is now understood that although there are prime ages for learning (e.g. acquiring words, or distinguishing between visual stimuli), these windows are hardly set in stone.

Teachers and other educators are particularly susceptible to neuromyths, possibly because of the sheer amount of information about the brain, both correct and incorrect, that they encounter. There may also be a backfire effect: teachers who are eager to implement neuroscientific findings out of sheer goodwill often run into neuromyths precisely because they are looking for quick solutions. What darkens the picture is that neuroscience novices are no better than laypeople at telling myth from fact; only experts are able to do so!

Resolving the perpetuation of neuromyths (and the horror of products like Brain Gym, which still exist despite having no scientific backing) would be a two-way street requiring better communication from both parties: educators and neuroscientists. Neuroscientists need to ensure that media translations of their work are not misconstrued, and developers of educational products need to hire consultants with credentials in neuroscience. On the other hand, initial teacher training should require looking at findings with a critical eye, and not judging an article as more scientific simply because it contains brain images (as people have been found to do!).

One often hears the saying that a little knowledge is a dangerous thing. It is far more frightening when applied to the people who are expected to dispense knowledge: our teachers. As consumers of knowledge, we must therefore critically appraise the product we consume.

Sneha Mani


Looking for the Good Samaritan

Meerkats. These adorable, furry animals are well known for their starring role in the BBC-produced film The Meerkats. What’s more, meerkats are often used as model animals for studying altruistic behavior.

These animals are famously known to stand guard while other members of their gang forage for food. If they detect any threats, the sentinels call out to the rest of the meerkats, which then run to nearby hiding places. It seems that they are risking their lives to protect their groups. However, a 1999 report showed that guards are the first to flee after sounding an alarm and that sentinels, in fact, position themselves so that they have the most time to reach safety. So what does altruism really mean, and do meerkats have a hidden agenda?

Altruism is defined by Daniel Batson as “a motivational state with the ultimate goal of increasing another’s welfare.” This differs from egoism, the ultimate goal of which is individual benefit. It seems that the meerkats who stand guard as sentinels might therefore be acting with egoistic rather than altruistic motivation. But what about humans? Can they ever be truly altruistic? Does the Good Samaritan, a person who helps others without any expectation of reward, exist?

There are several theories that explain prosocial and altruistic behavior in humans. One such model is the Empathy-Altruism hypothesis developed by Daniel Batson. According to this theory, when a person sees another in need, they might help either to reduce their own distress or because they expect to be rewarded for their service. There is a third possibility, however: they may feel empathy for the person in need, and in that state they are willing to help regardless of what they might gain. Reducing the other person’s suffering becomes the most important goal, suggesting that it is possible for humans to behave in a truly altruistic manner.

On the other hand, evolutionary theories of altruism suggest that humans behave selfishly even when helping another person. For example, according to the Kin Selection hypothesis, humans preferentially help their genetic relatives, especially their descendants. Although helpers may be reducing their own chances of survival, they are increasing the probability of their genes being passed on to the next generation. Moreover, the degree of helping increases the more closely the individuals concerned are related. It is therefore not surprising that people are also more willing to help those they perceive as similar to, rather than different from, themselves.

However, humans have behaved altruistically toward complete strangers. For example, Patrick Morgan risked his life by jumping under a stationary train in Sydney, Australia, to save an elderly woman who had fallen beneath it; he later said he was simply doing what he thought anyone else would do. Similarly, Vishnu Zende, the railway announcer at Chhatrapati Shivaji Terminus, saved hundreds of lives by alerting commuters to leave the station during the 26/11 terrorist attacks in Mumbai.

I guess the Good Samaritan does exist. And in any case, helping others is always good, even if you do it to help yourself.

Kahini Shah

Free Will: Is this the real life… Is this just fantasy?

“You say: I am not free. But I have raised and lowered my arm. Everyone understands that this illogical answer is an irrefutable proof of freedom.”
― Leo Tolstoy, War and Peace

When you were getting ready in the morning, did you choose the colour of the shirt you would wear today? Do you arbitrarily choose what dish you want to order at a restaurant? Did you voluntarily choose the subject of your undergraduate degree?

All these questions refer to the choices we have while making decisions in life. The ability to make such choices is what we call free will: the ability of an agent to select an option (a behavior, an object, a course of action, etc.) from a set of alternatives (Mick, 2008).

Most people believe that they have free will and that they control their decisions. However, many psychologists and philosophers counter the idea of free will with that of determinism. Philosophy defines determinism as the notion that every event or state of affairs, including every human decision and action, is the inevitable and necessary consequence of antecedent states of affairs. In simpler terms, all our actions are pre-determined, and we do not really have the freedom of ‘choosing what we want to do’. Daniel Wegner, in his book The Illusion of Conscious Will (2002), argues that free will is just an illusion: we attribute it as the cause of events whose actual causes we do not really understand.

To test whether people really have free will, Benjamin Libet (1982) conducted an experiment in which he measured cerebral activity and found that freely voluntary acts were preceded by a specific electrical change in the brain (the readiness potential, ‘RP’) that began about 550 ms before the act. Subjects became aware of the intention to act 350–400 ms after the RP began, but 200 ms before the motor act. He therefore concluded that the volitional process was initiated unconsciously (through a set of neurological functions), but that the conscious function could still control the outcome. Hence, even though his experiments indicated that the choices people made were not random but predetermined, the existence of free will could not be completely ruled out.

Furthermore, in the field of criminology there is a long-standing argument that we do not have free will, and that a person’s criminal behavior is instead determined by environmental, biological and social factors. Many have used this argument in courts of law to justify transgressions, but it has not found many takers in the legal system. For instance, American courts are unlikely to warm up to the idea of genetic causes of crime, and tend to stress that people have the free will to choose between right and wrong (Jones, 2003). Hence, even within the justice system, there is no clear acceptance or rejection of free will.

But what if we do indeed make choices consciously, and only come to know about them much later? Does that mean free will really exists, but we have all misinterpreted exactly how it happens? Holton (2004), in his review of Wegner’s The Illusion of Conscious Will (2002), suggested that the precursor events might themselves be genuinely mental events of which the subject is not aware until later. This leads to a whole different set of ideas about how free will operates.

People have argued strongly both for and against free will, but no consensus has been reached. One of the oldest questions to plague the field of psychology remains unresolved. When I decided to write about this topic, was it my freedom of choice, or was it some predetermined set of biological and neurological functions?

Sampada Karandikar