Thursday, May 31, 2007

British brain sex

Illustrating the power of online research methods, the journal Archives of Sexual Behavior has recently published a variety of papers on sex and cognition using data gathered from over 200,000 people via the BBC website. An introductory editorial can be found here.

One of these studies, by Maylor et al., discusses four experimental tasks represented in the vast BBC dataset that have been associated with sex differences in the past (mental rotation, line angle judgment, category fluency, and object location memory).

The number of participants is staggering when compared to the usual cognitive psychology study, and not surprisingly is skewed in favor of the younger age cohorts.

The results in a nutshell: the men outperformed the women on mental rotation and line angle judgment, and the women outperformed the men on fluency and object location memory. (Of course, how much of these sexual dimorphisms would be inherent to the sexes in the absence of a strongly sexually dimorphic culture is a much larger question.)

Further, when looking at the data broken down by sexual orientation, gay and bisexual men and women had average scores that were less sexually dimorphic than those of the heterosexuals. For instance, gay women scored better on mental rotation than did straight women (see Figure 3A left in the paper), and gay men scored better on fluency than did straight men (Figure 3C right). This less-dimorphic pattern doesn't hold up across the board, though; Figure 3D suggests that the gay participants might be finding le mot juste more often regardless of whether they were men or women.

The data plots across all of the experiments show a large decrease in scores with increase in age, and as the title of the paper suggests, the authors make much of the different patterns of decline between the two sexes. One thing that bears mention regarding this kind of study is that this isn't a longitudinal study in which the same people are followed over time. Rather it's a cross-sectional study in which the different age groups are represented by people who are products of different eras. As an example, if you consider the education of the people participating in this study, you find that the younger participants are more educated, and further that the difference in education between men and women is bigger for the older generations. Something to keep in mind when considering arguments about different degrees of cognitive "decline" in men and women.

Nonetheless, this and the other studies represented in the special issue bode pretty well for the future of online studies. Where are my 200k participants?

Oh, I almost forgot the reason I found this article to begin with. Judy Skatsoon of the Australian Associated Press seems to think that performance on mental rotation tasks is the same as map-reading:
It has long been a tedious joke that women are bad at reading maps, but there could be some truth in it. A new study into the mental skills required to read a map has handed blokes new ammunition and dealt heterosexual women a final indignity.
Oh, for shame, Judy. Carol Lloyd is having none of that.


Maylor, E. A., Reimers, S., Choi, J., Collaer, M. L., Peters, M., & Silverman, I. (2007). Gender and sexual orientation differences in cognition across adulthood: Age is kinder to women than to men regardless of sexual orientation. Archives of Sexual Behavior, 36, 235-249.

Saturday, May 19, 2007

Can running improve your memory?

In honor of the Bay to Breakers run tomorrow, today's posting is about running and possible effects on learning. A recent issue of Neurobiology of Learning and Memory contains an article entitled "High impact running improves learning," by Winter, Breitenstein, Mooren, Voelker, Fobker, Lechtermann, Krueger, Fromme, Korsukewitz, Floel, and Knecht (phew!) and conducted at the University of Muenster.

In a nutshell, the idea was to compare the efficacy of learning new materials depending on the level of physical exertion just prior to learning. In a "relaxed" condition, the runners were simply sedentary for 15 minutes before learning. In a "moderate" condition, low impact running was done, defined as running for 40 minutes at a fixed individual heart rate (designed to keep energy consumption aerobic). In an "intense" condition, high impact running was done, consisting of two sprints of increasingly fast pace "until exhaustion" (designed to trigger anaerobic energy consumption).

The task after these three conditions (counterbalanced within subjects) was an associative learning task between an object and a novel word, e.g., learning that a picture of ein auto corresponds to /glump/ in a new, artificial language. The learning task was rather implicit; a picture and a non-word would be presented, and the subjects had to decide quickly if the pairing was "correct" or not. (10 times out of 11, it was.) Immediately after, as well as one week and 8 months later, subjects' ability to identify correct pairings of the new words and their German "translations" was tested.

So, does either low- or high-impact running make it easier to learn new words? If you mine the data enough, it sure does. The comparison I was most interested in - whether memory of the words would be better after either running condition at either 1 week or 8 months post-learning - was not too compelling. There was not even a trend for a main effect of condition at either test-time. Fortunately, an "exploratory comparison" between the low- and high-impact running at the one-week test resulted in a significant difference between the two (p = .03). I guess we're supposed to overlook the multiple, unmotivated statistical tests that accompany this one "significant" difference, and the fact that memory in the sedentary condition lies between the two running conditions (Figure 2B, shown below).
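To make the multiple-comparisons worry concrete, here's a minimal sketch. The p-values other than the reported .03 are invented, and the number of tests (six) is an assumption for illustration; the point is simply that an exploratory p = .03 doesn't survive even the crudest correction once a handful of tests have been run:

```python
# Toy illustration of the multiple-comparisons problem. Only the .03 comes
# from the paper's reported exploratory comparison; the rest are invented.
p_values = [0.03, 0.21, 0.45, 0.62, 0.74, 0.88]

alpha = 0.05
# Bonferroni: with k tests, each must clear alpha / k to count as significant
threshold = alpha / len(p_values)

survivors = [p for p in p_values if p <= threshold]
print(f"corrected threshold: {threshold:.4f}")  # 0.0083
print(f"surviving p-values: {survivors}")       # []
```

Less conservative procedures (Holm, false discovery rate) exist, but with a best p of .03 across six tests, none of them would rescue the result either.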

Interestingly, the data from the learning part of the experiment (Figure 2A) do seem at first to support the authors' claim that high impact running improves learning - the percent correct for the high impact condition climbs more steeply than the other conditions. Responses in that condition were also faster.

But wait a second, what was going on during training again? The subjects were to decide intuitively whether the picture and word were the correct match. 10 times out of 11, this was the case. So in other words, if you can just manage to hit yes on every trial, you automatically get about 90 percent correct. And if you were, oh say, pumped up on epinephrine, might you not answer a little more quickly and perhaps show a different response bias?

The same concern may apply to the retention data as well. The authors never tell us what the ratio of correct to incorrect pairs was for this part of the test, and as with the training phase, only report results as percent correct. Got signal detection analysis? (Evidently not.) And I won't even get started on the fact that the pseudowords and concepts to be remembered, although balanced on important variables, do not seem to have been counterbalanced across the three conditions.
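For the curious, here's what a signal detection analysis would look like. The hit and false-alarm counts below are hypothetical (the paper doesn't report them), but they show how a strong "yes" bias can produce an impressive percent correct with only modest sensitivity when correct pairs dominate:

```python
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, fa_rate):
    """Sensitivity (d') and criterion (c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

# Hypothetical counts (not from the paper): a subject who says "yes" to
# nearly everything, in a test where correct pairs outnumber incorrect 10:1.
hits, misses = 98, 2   # responses to the 100 correct pairs
fas, crs = 9, 1        # responses to the 10 incorrect pairs

hit_rate = hits / (hits + misses)   # 0.98
fa_rate = fas / (fas + crs)         # 0.90

pct_correct = (hits + crs) / (hits + misses + fas + crs)
d, c = d_prime_and_criterion(hit_rate, fa_rate)
print(f"percent correct = {pct_correct:.0%}")  # 90% - looks great...
print(f"d' = {d:.2f}, c = {c:.2f}")  # ...but d' is modest and c is very liberal
```

Percent correct alone can't distinguish "learned the pairings" from "got trigger-happy on the yes button" - which is exactly the epinephrine worry above.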

Those who are interested in the physiology, and less skeptical than me about the basic learning effects, might be interested in checking out the analyses of lactate, catecholamines (dopamine, epinephrine, and norepinephrine), and brain-derived neurotrophic factor (BDNF), which were taken at baseline, post-intervention, and post-learning. The authors have interesting stories, if not entirely compelling ones, for all the effects.

Oh well, maybe running won't improve our memories for new things. It's unclear whether it's best to remember all the sights from Bay to Breakers in any case.


Winter, B., Breitenstein, C., Mooren, F. C., Voelker, K., Fobker, M., Lechtermann, A., Krueger, K., Fromme, A., Korsukewitz, C., Floel, A., & Knecht, S. (2007). High impact running improves learning. Neurobiology of Learning and Memory, 87, 597-609.

Friday, May 11, 2007

Mmm, duuu-alll-iiis-mmm.

In starting up a new blog, I've come up against a question that many of you no doubt have also struggled with. That question is, of course: how long must I wait before I can do a blog entry about zombies? Today's US release of 28 Weeks Later, sequel to 28 Days Later (2002), certainly provides an excellent excuse.

Recently I was happily reintroduced to the concept of the philosophical zombie (p-zombie to its friends) when listening to the Guardian's Science Weekly interview with Susan Blackmore, psychologist, "memeticist," and, it turns out, Hildabeast. She's recently written a book called Conversations on Consciousness, which may find its way into my book review soon. She's also the author of Consciousness: A Very Short Introduction and of course The Meme Machine.

Here's how Blackmore describes the p-zombie in the Guardian interview:
...the philosopher zombie is an old idea - been around in philosophy for a long time. It's this idea: Let's suppose that there's a... Ed sitting there, there you are, and there's another one next to you that looks exactly the same and behaves exactly the same and he's in every possible way indistinguishable from the outside. But, it's a zombie; it's all dark inside, it's not having any experiences at all! The zombic hunch is the idea that that would be possible. You could build such a thing, you know, with chips inside, or I don't know, have a strange mutation or something or another.

If you believe that such a philosopher zombie is possible, then you must believe that consciousness is some kind of added extra that's sort of been taken out of the zombie and is put into us at some point or has evolved for some purpose or has some function or whatever. On the other hand, you might argue, well that that's simply impossible, an intelligent creature that would act like you, with all of the subtleties and nuances of the way you behave would have to be conscious because consciousness is just what it means to be that kind of creature.
Now I'm really regretting not ever having heard Daniel Dennett lecture about zombies, because I'm only now faced with the fact that I might not actually believe in zombies, or at least in p-zombies. Has thinking this thought resulted in a zombie, somewhere out there, falling down dead? (Or was it dead already? What about those pregnant zombies? Isn't that impossible? But I digress...)

Fortunately, those blank-faced, ravenous-for-blood, I'm-gonna-eat-your-brains sorts of zombies are a far cry from p-zombies. After all, the zombies of 28 Days Later and 28 Weeks Later are the result of a virus using humans to spread itself around, which provides a pretty clear material difference between the brains of the conscious survivors and the unconscious (and non-p) zombies. Come to think of it, this virus may not be so different from the lancet fluke, which causes ants, once infected, to climb up blades of grass so that they'll be eaten by cows or sheep, moving the parasite on to the next part of its lifecycle. This is one of Dennett's favorite metaphors for - you guessed it - memes (see how everything comes full circle?), and more specifically, religion (more on that soon).

The pictures in this post are from the Zombie Flashmob in San Francisco, a tradition (dare I say meme?) which I'm happy to report has recently spread to Memphis.

Wednesday, May 9, 2007

Prehending mirror neurons by the throat

When is the last time you used the verb prehend, as opposed to its more common cousins comprehend or apprehend? Has it been a while? I thought so.

I couldn't help wondering about this when thumbing through the May issue of the Journal of Cognitive Neuroscience. I came across the latest article from those über-neurolinguists over at the MPI for Human Cognitive and Brain Sciences in Leipzig. The article, "Comprehending prehending: Neural correlates of processing verbs with motor stems" by Shirley-Ann Rüschmeyer, Marcel Brass, and Angela Friederici, is the latest chapter in the craze about mirror neurons and their implications for embodied language.

This being a blog about neuroscience, I'm sure that most of you have already read if not blogged about mirror neurons yourselves, and probably have some ideas about the various theories of embodied meaning that have stemmed, in part, from their discovery. In short, monkeys have neurons in the premotor cortex (and the posterior parietal lobule) that respond both when the monkeys perform particular hand movements (ones that are typically object oriented) and when they observe someone else doing the same thing. Some have suggested that this premotor area is the homologue of Broca's area in humans, and that the idea of conceptualizing perceptions as actions lays the groundwork for the evolution of communicative gesture and, eventually, of spoken language (albeit spoken language that is grounded in embodied thought). Evidently, this also leads to jumping in front of trains to save people.

Given this, neuroimagers have become interested in whether producing or perceiving language that has more to do with concrete physical actions might implicate motor areas of the brain, like the primary motor cortex or the premotor cortices anterior to it. Not only does this seem to be the case, but there may be some somatotopic correspondences between the two, i.e., when you hear lick, pick, and kick, motor representations for the face, arms, and legs appropriately perk up (Hauk, Johnsrude, & Pulvermüller, 2004).

This new study in JOCN takes things in a slightly different direction. The question is first, what is the difference between perceiving motorically grounded words like greifen (to grasp) compared to more abstract words like denken (to think), and more interestingly, what is the difference between perceiving derivations of such words that have distinct meanings, like begreifen (to comprehend) and bedenken (to consider). For you true psycholinguistic geeks, it was a 2 x 2 design (motor-abstract and simple-complex) using visual presentation of single words and a lexical decision task (responding only to nonwords).

Rüschmeyer et al. find that simple motor words like greifen, compared to simple abstract words like denken, elicit greater activity in several areas, including the primary motor and somatosensory areas in the pre- and postcentral gyri, consistent with the idea that motor words implicate (sensori)motor representations. No significant effects were observed with the reverse contrast.

Interestingly, the analogous comparison with the morphologically complex words (begreifen words minus bedenken words) did not show differential involvement in motor areas. (They did find effects in the right middle temporal gyrus and left cerebellum, and as before, no significant effects with the reverse contrast.)

I wonder if things would have been different if the morphological derivations were more transparent. Sure, if we think about it we can see that greifen and begreifen are morphologically related, at least from a historical perspective, but would we have reason to expect that all such relationships are psychologically real for today's speakers? I certainly had never thought of the word comprehension as a more abstract derivation of the physical word prehension before today, and I suspect that my motor cortex hadn't either.

(This may not be the best example to consider in English, since to prehend is archaic whereas greifen is not. So consider another: do you understand the word authority immediately as a derivation of author, in the same way that you understand serenity as a derivation of serene? I think there's a difference psycholinguistically, and I suspect that most of the items used in this study were more like the former example.)

Is this a diachronic versus synchronic issue? I'd be interested in what a historical linguist has to say.

This guy knows what I'm talking about.


Rüschmeyer, S.-A., Brass, M., & Friederici, A.D. (2007). Comprehending prehending: Neural correlates of processing verbs with motor stems. Journal of Cognitive Neuroscience, 19, 855-865.

Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41, 301-307.

Friday, May 4, 2007

A room without books...

The recent New York Times article "Are book reviewers out of print?" got me thinking about whether a regular book review could be a cornerstone of this blog. I'm thinking so. Since my current position doesn't come with any legitimate access to journals, I've got to do more than just react to the mass media. Although I'm sure there's going to be plenty of that too.

Right now I'm working my way through Daniel Dennett's Breaking the Spell and also imagine I'm going to have a thing or two to say about Daniel Levitin's This is Your Brain on Music. So, stay tuned for those.

Regarding the more general issue of what Neurozone is supposed to be about, I'm still sorting that out too. Your suggestions are welcome!

Monday, April 30, 2007

Dropping acid

In the May issue of Harper's you'll find an insightful essay by Gary Greenberg entitled "Manufacturing depression: A journey into the economy of melancholy." Greenberg, a psychotherapist and freelance writer, has previously written equally engaging pieces in Harper's, Mother Jones, Wired, The New Yorker, and McSweeney's.

"Manufacturing depression" tells the story of Greenberg's experience as an enrollee in a psychopharmacology study at Massachusetts General, in which the effectiveness of omega-3 fatty acids in treating major depression was being tested.

His story with the Mass General researchers begins with an evaluation of depression to see if he was an appropriate fit for the study. Greenberg thought that he'd meet the criteria for what the DSM calls minor depression, "figur[ing] anyone paying sufficient attention was bound to show the two symptoms out of the nine". Heavily abbreviated, those criteria are:
  1. depressed mood nearly every day
  2. markedly diminished interest in activities nearly every day
  3. significant weight loss or weight gain
  4. insomnia or hypersomnia nearly every day
  5. psychomotor agitation or retardation nearly every day
  6. fatigue or loss of energy nearly every day
  7. feelings of worthlessness or excessive guilt nearly every day
  8. diminished ability to concentrate, nearly every day
  9. recurrent thoughts of death, suicidal thoughts
To his surprise, Greenberg is deemed ineligible for the minor depression study because he meets the criteria for major depression (at least five of the symptoms above). He's whisked away into the omega-3 study, and an insightful deconstruction of psychiatry ensues.

I won't spoil the rest of the essay; instead here are my two cents on one of Greenberg's relatively minor critiques of the experience. As in any pharmacological trial, the omega-3 fatty acid study is double-blind and placebo-controlled; he doesn't know whether he's getting the omega-3 or the placebo, and neither do the researchers he's interacting with. The thing that surprised me was that at the end of the study, the researchers still could not "break the blind," i.e., tell him which one he was taking.

Evidently the argument for not breaking the blind on each patient's completion of the study is that otherwise, the researchers who represent the "double" part of the blind could detect a pattern across participants.

Surely, if this is the only concern, there are ways around the problem, such as having another researcher in the lab, or the pharmacologist who knows the assignments, notify the participant through some channel other than the researcher who administers the examinations.

In the end, Greenberg does find out which one he took, but only after mailing the rest of his pills off to the lab.

Recommended reading:

Greenberg, G. (2007, May). Manufacturing depression: A journey into the economy of melancholy. Harper's, 314, 35-46.

Listen to WBUR's On Point radio program related to the article

Spiegel, A. (2005, January). The dictionary of disorder: How one man revolutionized psychiatry. The New Yorker.

Sunday, April 29, 2007

The fog of war

The New York Times front page today features an article on the cognitive side effects of chemotherapy, sometimes known as "chemobrain." (Chemotherapy fog is no longer ignored as illusion.) The reported symptoms include short-term memory loss, difficulties with concentration and multi-tasking, and word-finding problems (anomia). The symptoms sound like those typically reported in aging populations under the syndrome of "mild cognitive impairment" or MCI.

The angle that the NY Times takes with the piece is that cancer patients have been complaining about these symptoms for years, and only recently have doctors taken the concerns seriously. On the other hand, it's clear even from reading the Times piece that objective evidence for the existence of this phenomenon is mixed. It's hard to know whether a subtle word finding or working memory problem that cancer patients experience following chemotherapy is different from what they would have experienced had they not chosen chemo. (I'm guessing that you can't very readily do a true random assignment, clinical-trial style, experiment to address a question like this.)

The mechanism through which chemotherapy would directly affect cognition is an even bigger mystery. The Mayo Clinic's website suggests that some of the chemotherapy drugs may be getting across the blood-brain barrier, which keeps a lot of substances in the blood from affecting the brain directly. Alternatively, the effects could be a more indirect effect of stress or depression caused by chemo (or of the cancer diagnosis itself) or of hormone therapy, which is part of the treatment regimen for some forms of cancer. The existing studies of the chemobrain phenomenon seem to be based primarily on breast-cancer patients, who often undergo hormone therapy in addition to chemo.

A quick Pubmed search for Tim Ahles of Sloan-Kettering, one of the investigators referenced in the Times, led me to his recently published article in Nature Reviews Cancer. In that article, he and co-author Andrew Saykin suggest that genetic risk factors predispose some individuals both to cancer and to cognitive decline. In other words, this could be a classic example of the third-variable problem: the chemotherapy may not be directly causing mild cognitive impairment; rather, the need for chemo and the MCI might both stem from a third factor (a common genetic predisposition). This isn't just getting bogged down in semantics; if a direct relationship between chemo and so-called chemobrain existed, then avoiding chemo would mean avoiding chemobrain. That would not be the case if the relationship were mediated primarily by a third variable. Of course, the dynamics of this are much more complicated, as Ahles and Saykin discuss, and may involve a complex web of interactions between the predispositions and the suite of more direct effects of cancer, chemotherapy, and all its associated treatments and stressors.
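The third-variable logic is easy to demonstrate with a toy simulation (every probability below is invented, purely to illustrate the structure of the argument). Here a latent predisposition raises both the chance of needing chemo and the chance of cognitive decline; chemo itself has no effect on cognition at all, yet the chemo group still shows more "chemobrain":

```python
import random

random.seed(0)

# Toy model (all numbers invented): a latent predisposition raises both the
# probability of needing chemotherapy and the probability of cognitive
# decline. Chemo has NO direct effect on cognition in this model.
def simulate_person():
    predisposed = random.random() < 0.3
    chemo = random.random() < (0.6 if predisposed else 0.2)
    mci = random.random() < (0.5 if predisposed else 0.1)
    return chemo, mci

people = [simulate_person() for _ in range(100_000)]

def rate_of_mci(group):
    return sum(mci for _, mci in group) / len(group)

chemo_group = [p for p in people if p[0]]
no_chemo_group = [p for p in people if not p[0]]

print(f"MCI rate with chemo:    {rate_of_mci(chemo_group):.3f}")
print(f"MCI rate without chemo: {rate_of_mci(no_chemo_group):.3f}")
# the chemo group shows markedly more MCI, despite zero causal effect
```

In this model, skipping chemo wouldn't lower anyone's MCI risk by a single point, which is exactly why the distinction matters clinically.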

Recommended reading:
Ahles, T. A., & Saykin, A. J. (2007). Candidate mechanisms for chemotherapy-induced cognitive changes. Nature Reviews Cancer, 7, 192-201.

Friday, April 27, 2007

In love with simplistic brain theories

The number two most-emailed article at the New York Times today is Tuesday's article, "If you want to know if Spot loves you so, it's in his tail." The article covers a study published last month in Current Biology by A. Quaranta, M. Siniscalchi, and G. Vallortigara, of the University of Bari and the University of Trieste.

Quaranta et al. studied 30 dogs by presenting them with four kinds of stimuli under experimental settings: the dog's owner, a human stranger, a "dominant" unfamiliar dog, and a cat. They found that when the dogs saw their owners, there was a statistically significant (p < .00001) bias in the direction of tail wagging: more wagging to the right. Seeing a human stranger resulted in a similar bias to the right (p = .00007), but with less amplitude. With the cat, the bias persisted (p = .03) but with still less amplitude. When the strange, dominant dog - a Belgian Shepherd Malinois - appeared, the dogs were more likely to wag their tails to the left (p = .014).

The authors suggest that this result ties in to one of the many theories of emotion and hemispheric asymmetry that have been proposed: one by Richard Davidson of the University of Wisconsin which suggests that "approach" and "withdrawal" behaviors are associated with the left and right cerebral hemispheres, respectively. Given how cerebrum and body are wired up, movements on the left side of the body are associated with neural events in the right hemisphere, and vice versa.

I was intrigued by this paper and Davidson's hypothesis, and was a little bit skeptical that the approach-withdrawal theory is as widely accepted as the NY Times would have it. As Sandra Blakeslee, the author of the Times piece puts it:
Research has shown that in most animals, including birds, fish and frogs, the left brain specializes in behaviors involving what the scientists call approach and energy enrichment. In humans, that means the left brain is associated with positive feelings, like love, a sense of attachment, a feeling of safety and calm. It is also associated with physiological markers, like a slow heart rate.

At a fundamental level, the right brain specializes in behaviors involving withdrawal and energy expenditure. In humans, these behaviors, like fleeing, are associated with feelings like fear and depression. Physiological signals include a rapid heart rate and the shutdown of the digestive system.
For readers who were in school in the '80s, you might remember some of the "I'm left brained, you're right brained" pseudoscience that educators used to disseminate to unsuspecting fifth graders as solid neuroscientific fact. (Alison Gopnik, writing yesterday on Slate, compares today's obsession with mirror neurons with the asymmetry craze of yesteryear.) So let's consider this "fundamental" brain asymmetry a moment:

The approach-withdrawal asymmetry hypothesis is one of several inter-related hypotheses that have been put forth over the years concerning possible asymmetric processing of emotion in the brain. (See Demaree et al., 2005 for a review.)
  1. Right hemisphere. It has long been observed that neurological patients with damage to the right hemisphere are more likely to suffer affective complaints than are patients with left-hemisphere damage. Earlier theories had suggested a stronger role for the right hemisphere in all emotional processing (which may still hold up when it comes to the perception of emotional states in others).
  2. Valence model. Another class of theories argued that "positive emotions" entailed a greater left hemisphere component, and "negative emotions" a greater right hemisphere component.
  3. Approach-withdrawal. Davidson and colleagues extended the valence model by incorporating a strong prefrontal component and emphasizing behavior (or the planning of behavior) rather than simply the emotional state.
  4. Behavioral activation and behavioral inhibition. Demaree et al. also discuss a related extension to which J. A. Gray is a major contributor. This is similar to approach-withdrawal, but with some subtle differences in prediction, e.g., the left hemisphere, and not the right, would be involved in the "active avoidance" of a negative stimulus (i.e., big Belgian shepherds).
The evidence for all of these theories is equivocal, even when considering the neuropsychological, sodium amytal, and EEG data on which they were primarily based, not to mention the later brain imaging evidence that posed more serious problems.

More generally, Davidson's approach-withdrawal asymmetry hypothesis is specific to the realm of emotion and emotion-driven behavior, and specific to particular areas of the brain, most notably regions within the prefrontal cortex and the amygdalae. It does not argue for anything so fundamental about the entire "left brain" and the "right brain." Even if it proved to be an adequate description for this specific problem space, it would still be one particular asymmetry that results from one network of brain regions coming together to solve one class of problems. "At a fundamental level," brain function and behavior represent myriad neural networks and problem spaces, each of which has its own peculiar pattern of hemispheric asymmetry.

Recommended reading:
McManus, C. (2002). Right hand left hand: The origins of asymmetry in brains, bodies, atoms, and cultures. Cambridge, MA: Harvard University Press.

For some other bloggers' takes on the story:
Frontal Cortex
John Hawks