“We use only 10% of our brain capacity” – ridiculous, right?

An outcry from the scientifically righteous swept the media when Luc Besson’s latest action flick “Lucy” turned out to hinge on the old “myth” that humans use only 10% of their brain capacity. Another dumb Hollywood product that doesn’t give a damn about getting the facts right, and another disservice to public education in science. That’s what many critics must have thought, and some stepped in to set things right online. But, as we’ll see, it’s not so clear who is doing the greater disservice to the public: Hollywood or its detractors.

To avoid adding to the confusion, we must distinguish between two forms of the claim. The wide version: humans use only 10% of their brain capacity, where capacity means any kind of functional ability based on any kind of brain substrate. The narrow version: we use only 10% of our brain capacity because we use only 10% of our brain’s anatomy.

Some of the criticism of “Lucy” is aimed at the narrow version because it is featured in the film’s production notes, although I don’t know whether the movie itself is based on it. In any case, the narrow version is only interesting insofar as it is a special case of the wide version, i.e. insofar as the concurrent activity of more than 10% of our brain areas would unlock untapped functional potential. I take this scenario to be the truly interesting question about the 10% claim, and what critics are at least implicitly aiming at when they discuss it. Thus, I will be focusing on the wide version here.

The claim about the general underuse of our brains has been called a myth, but I’m calling it a thesis, because from a scientific point of view, that’s what it is. The label ‘thesis’ also has the advantage of not suggesting that the claim has already been decided one way or the other, whereas “myth” suggests that its falsity has been established. But the 10% thesis is not demonstrably false; it is merely unproven: science has never shown that we use only 10% of our brain power.

If that were all, however, the fuss over “Lucy” would be hard to understand. The skeptical responses go further: they do not merely assert that the thesis has never been proven right; they (implicitly) claim that it can be shown to be false. And although the distinction may seem sophistical, these are two importantly different things, because the first is true while the second is false. In other words, it is true that science has not shown the 10% thesis to be right, but it is false that science has shown it to be wrong. The thesis could still be right. It is an as yet insufficiently researched hypothesis. To claim that we know it is false is to paint an unduly optimistic picture of the scope of current scientific knowledge.

Let’s look at two responses that do exactly that, one from the blogging community, one from the scientific media. The first counterargument, by Steven Novella at the Skeptics’ Guide to the Universe, is to invoke the logic of evolution:

Humans have the highest encephalization quotient (brain to body size normalized for mammals of our relative size) of any animal… Evolution would not have favored such a large brain if it were not adaptive…. Caring for and developing such a large organ comes at a cost… If all our ancestors had to do was use a greater percentage of their existing gray matter, evolution would likely have favored that path… If anything, evolution is efficient…

Evolutionary thinking is also brain scientist Barry L. Beyerstein’s first line of defense in Scientific American:

[T]he brain, like all our other organs, has been shaped by natural selection. Brain tissue is metabolically expensive both to grow and to run, and it strains credulity to think that evolution would have permitted squandering of resources on a scale necessary to build and maintain such a massively underutilized organ.

Unfortunately, this what-would-evolution-do mode of reasoning is as flawed as it is popular. It was exposed under the name of “adaptationism” 35 years ago by evolutionists Stephen Jay Gould and Richard C. Lewontin.1 It has since been the subject of an extensive literature in the philosophy of biology2, of which most scientists and science writers seem to be unaware. The basic fallacy is twofold: to assume that every trait of an organism must be an adaptation, and to treat evolution by natural selection as a process that maximizes the efficiency and effectiveness of such traits, finding optimal solutions without wasting resources. But evolution is no such thing. Think about it: in order to use resources economically, natural selection would have to be able to plan ahead, to weigh different alternatives without actualizing them. These are features of a human mind, not of a natural process. Darwinian evolution has no way of foreseeing when it is heading in a non-optimal direction. As long as a trait manages to serve a fitness-enhancing function (or at least not to reduce fitness), any kind of kludge will do, no matter how inefficient and wasteful. Evolution does squander resources, gloriously, and without looking back (or ahead).3

Natural selection does not follow a “design logic” that would license inferences like “evolution would not have done X because it is bad engineering”. Evolution has no purpose. Indeed, that realization is exactly what sets the Darwinian worldview apart from religious postulations of a God with a plan. When the modern heralds of science speak of evolution in teleological language, they revert to the old theological mode of thinking, in which things were explained by purposeful forces. I’m not sure these are the people we should be getting our science from.4

The upshot is that evolution might indeed have created an organ that operates way below its limits. Optimizing its performance would have depended entirely on an alternative variant being available that operates more efficiently, and there is no guarantee that such a variant ever came up (note that natural selection itself does not create variation in traits; it depends on other processes to do that, and those processes do not systematically traverse the space of all possible phenotypes).

Novella’s next two arguments are from the domain of brain science, but frankly, I do not even see how they are relevant. First, he says, the brain is a “hungry organ”, using 20% of our metabolic resources, and it takes as little as a temporary drop in blood sugar or blood pressure to disrupt its functioning. Second, we know what most of the brain does: its areas have been largely mapped to different functions, such as language, vision, and motor control.

Do the facts that the brain uses a lot of energy, sits on the precarious verge of being underpowered, and has been extensively functionally mapped5 imply that it uses more than 10% of its capacity? No. Granny has an old car; it consumes preposterous amounts of gas and still doesn’t drive very fast. Plus, I know what its parts are doing, which is exactly how I know that if she only fixed that leaky fuel pump, her ride could be dope.

Novella continues with the claim that brain function is already “maxed out”: adding a second task to a primary one often disrupts performance, indicating that the brain is already operating at its limits, with no dormant potential to tap into. But that could be just inefficient “design”, something typical of products of evolution. It doesn’t mean there are no additional resources that could somehow be made available.

In fact, there are lots of ways in which the brain or, more generally, a technical system could harbor considerable dormant capacities. Twenty years ago, a communications network based on copper wires would have exchanged data at around 30 kbit/s. Today, thanks to ADSL, the same connections can carry at least 1,000 times that amount. Do you think it would be fair to say that the copper wire had vast untapped potential?
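As a back-of-the-envelope check on that ratio, here is a minimal sketch; the modern rate is my illustrative assumption, since actual DSL speeds vary with line length and standard:

```python
# Hypothetical figures for illustration only: the modem rate is from the
# text above; the DSL rate is an assumption, not a measured value.
modem_bps = 30_000        # ~30 kbit/s over a copper pair, mid-1990s
adsl_bps = 30_000_000     # ~30 Mbit/s over the same copper pair today

print(f"Same wire, roughly {adsl_bps // modem_bps}x the throughput")
# -> Same wire, roughly 1000x the throughput
```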

Or consider algorithms: the same computer can be vastly more efficient at sorting a list of N items depending on which sorting algorithm it is running. Some algorithms (given suitable data) finish the task in a time that grows linearly with N, while others take time proportional to N². For 10 items, the difference is 10-fold; for 1,000 items, it is 1,000-fold. That is enormous; a toy demonstration follows at the end of this passage.

Brains probably run algorithms, too. Maybe how intelligent you are depends on the kinds of algorithms your neurons implement. Consider the startling differences in cognitive ability between people of low and high IQ. Brain anatomy usually reveals nothing, so the difference must reside in properties that are not easily observed. Many of today’s neuroscientists would place their bets on functional connectivity, the ability of brain regions to form momentary, fluid neural aggregates that distribute the processing load. This is not something a current-generation brain scanner can see.

Thus, routine measures of brain activity such as fMRI and PET might not pick up on the aspects of neural circuitry crucial to differences in cognitive performance. It is not even legitimate to say of these techniques that they measure brain activity per se. There are many levels on which our mental organ is active, and the sort of activity captured by neuroimaging (e.g. blood oxygenation levels in fMRI) may or may not be relevant to a particular performance capacity. The mere fact that these methods do not distinguish between excitatory and inhibitory neural firing already makes simple interpretations of their results impossible. The dorsolateral prefrontal cortex “lit up” in the scanner when you chose between Coke and Pepsi? Maybe that means you thought hard about it. Maybe it means you suppressed thinking about it.
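And here is the toy demonstration promised above: a minimal sketch, in Python, that times a textbook quadratic insertion sort against the language’s built-in O(N log N) sort on the same machine and the same data. The exact numbers will vary by machine; what matters is how quickly the gap widens as N grows.

```python
import random
import time

def insertion_sort(items):
    """Textbook O(N^2) sort: shift each element leftwards into place."""
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

for n in (1_000, 5_000, 10_000):
    data = [random.random() for _ in range(n)]

    t0 = time.perf_counter()
    insertion_sort(data)
    slow = time.perf_counter() - t0

    t0 = time.perf_counter()
    sorted(data)  # Timsort: O(N log N)
    fast = time.perf_counter() - t0

    print(f"N={n}: insertion sort {slow:.3f}s, built-in sort {fast:.5f}s, "
          f"ratio ~{slow / fast:.0f}x")
```

Same hardware, same data, wildly different performance: the difference resides entirely in the procedure being run, not in anything you could see by inspecting the machine.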

The problem of not knowing what really makes our brains brainy is put into stark contrast by neurological case studies. In 1980, Science reported on hydrocephalic patients examined by British neurologist John Lorber.6 One group had lost over 90% of their brain tissue, and yet half of these patients had IQs over 100. Particularly remarkable was the case of a young student who, in Lorber’s words, “has an IQ of 126, has gained a first-class honors degree in mathematics, and is socially completely normal. And yet the boy has virtually no brain.” Lorber estimated that where there should have been 5 cm of cortex, the boy had only a 1 mm thin layer of neurons, and his brain weight was probably about a tenth of the normal 1.5 kg. His conclusion: “[T]here must be a tremendous amount of redundancy or spare capacity in the brain, just as there is with kidney and liver.” He is backed up by Britain’s renowned neuroscientist Colin Blakemore, who explains that because the brain frequently has to cope with minor lesions, “it’s crucial that it can overcome these readily. There may be some reorganization of brain tissue, but mostly there’s a reallocation of function.” In other words, redundant brain matter must be available to assume the functions of parts lost to brain damage.

Cases like Lorber’s are certainly evidence for unused capacities in human brains. But the point here is not to argue that they prove the 10% thesis, merely to show that the thesis is entirely consistent with what we know about the brain. It is far from being disproven.

With “Lucy” in mind, somebody might object that the kind of immediate brain boost featured in the movie is on an entirely different level from the slow, long-term reorganization of functional neuroanatomy in hydrocephalics. And they would be right. The two are not the same, and who is to say how much more likely one is than the other? But the objection is irrelevant to the general form of the 10% thesis targeted here, which says nothing about how, or even whether, the brain’s dormant potential can be tapped.

Nevertheless, it cannot hurt to point out that there are real-life examples of quite spectacular increases in cognitive power happening within quite a short time. I’m not referring to the once much-hyped promise of neuro-enhancement, although some drugs do improve cognitive performance in some situations. Rather, I’m thinking of the switch between depression and (hypo-)mania characteristic of bipolar disorder. The change from mind-numbing agony to sparkling excitement can happen within a couple of days or less, and the attendant improvement in cognitive function can be dramatic. The familiar connection between genius and madness is mostly due to the enhanced creativity that comes with a manic mind. Undoubtedly, the works of some of our intellectual and artistic heroes have greatly profited from a manageable measure of mania.7 Maybe some of you have personally experienced the startling release of mental energy that comes with this condition (apart from the suffering it also causes). The leap from a depressed to a manic state is a real-life example of a brain somehow tapping into normally unused potential.

Now then: on to a bleak recapitulation. Skeptics have promoted a misjudgment of the 10% thesis, asserting that we know it is false, when the only thing we know is that it hasn’t been shown to be true, which amounts to not much at all. The arguments given against the thesis by one prominent skeptic and by a scientist writing for Scientific American are abysmal. The argument that evolution is generally efficient shows a complete misunderstanding of basic evolutionary principles. Novella’s neuroscientific objections, to the extent they are even relevant, are easily dismissed.

On the other hand, there are many ways in which the brain could harbor untapped potential. The world of computers provides some hints: the increase in the data transfer capacity of copper wire, or the difference in the efficiency of algorithms. What current brain imaging techniques measure as “brain activity” is just one tiny aspect of all the things going on in a brain at any one time, and not even that aspect is always clearly interpretable (does an fMRI signal reflect excitatory or inhibitory activity?). Thus, observing that all areas of the brain show some level of activity in an fMRI scan does not prove that the brain is operating at full capacity. There is more to the brain than meets the scanner.

I’m sure all these bad arguments are indicative of some larger malaise. Maybe it’s too easy to become a skeptic (or a scientist, for that matter), and there’s no quality control; or maybe skeptics copy too much from other skeptics and mistakes get repeated. Maybe skeptics, in their laudable effort to teach science to the public, overstate the certainty of scientific knowledge; or maybe skeptics, like the rest of us, sometimes have a hard time activating the unused 90% of their brains. As tempted as I am to explore these possibilities, I leave them to you to ponder. I have to go now. I’ve got tickets for “Lucy”.

References

1. Gould SJ, Lewontin RC (1979) The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme. Proc R Soc Lond B 205, 581–598.

2. One could point to many scholarly articles here, but the best place to learn about the essential contingency of the evolutionary process is Stephen Jay Gould’s collection of essays in natural history. If you’re going to read just one thing about evolution, read Gould.

3. Examples are the embryos of several species (e.g. platypus, baleen whale) that grow teeth only to lose them as adults; the (extinct) giant deer’s 4 m wide, 40 kg antlers; or the Brazilian green turtle’s annual 3,000 km journey to a tiny Atlantic island to breed, when suitable breeding grounds could be found much closer to home.

4. As an aside, note Novella’s claim that “[e]volution would not have favored such a large brain if it were not adaptive”. Since being adaptive in this context is defined as being favored by evolution (more precisely, by natural selection), this statement is a mere tautology with no information value. It amounts to saying “evolution would not have favored such a large brain if evolution had not favored such a large brain”.

5. Note that the localization approach to brain function is considered backwards in some quarters (“neophrenology”). Fluid, distributed connectivity approaches are all the rage now (see, e.g., Tononi’s Integrated Information Theory of consciousness).

6. Lewin R (1980) Is your brain really necessary? Science 210, 1232–1234.

7. In a 2008 paper (Gamma et al.) I wrote: “The idea that creativity is intimately linked to madness has been common among both ancient and modern thinkers. According to this view, many outstanding personalities owed their productivity and intellectual or artistic achievements to being melancholic, manic-depressive or hypomanic. Systematic study of this phenomenon has been rare, but has substantiated the existence of a connection between bipolar disorder and creativity (Andreasen, 1987; Jamison, 1993). In particular, the case for a link between hypomania and entrepreneurial productivity, creativity and success has recently been made by Gartner (2005).”

Gamma A, Angst J, Ajdacic-Gross V, Rössler W (2008) Are hypomanics the happier normals? J Affect Disord 111, 235–243. doi:10.1016/j.jad.2008.02.020
Andreasen NC (1987) Creativity and mental illness: prevalence rates in writers and their first-degree relatives. Am J Psychiatry 144, 1288–1292.
Jamison KR (1993) Touched with Fire. Free Press, New York.
Gartner JD (2005) The Hypomanic Edge: The Link Between (A Little) Craziness and (A Lot of) Success in America. Simon & Schuster, New York.
