
Friday, October 7, 2011

Insula-gate

For those just tuning into this week's latest installment of NeuroNonsense brought to you by the New York Times, let me bring you up to date:

The New York Times allowed (nonscientist) Martin Lindstrom to once again use its Op-Ed space to "publish" non-peer-reviewed "science".

Scientists, disgusted, struck out at this perversion of science throughout the blogosphere (here, here, here, and here, though I'm sure I'm missing others). Dozens of prominent cognitive neuroscientists wrote a counter op-ed denouncing the practice (heavily edited by NYT staff).

At work the other day, a graduate student asked me why our field has a lower bar for press shenanigans and wildly implausible claims. I think there are several possible answers to this question (the fact that folk psychology seems to provide causally satisfactory explanations, the allure of pretty pictures of the brain "lighting up", and the intrinsic interest people take in their own brains all come to mind easily). However, I'm afraid there's also a capitalist component: Lindstrom makes his money convincing companies that his "science" will lead to better marketing outcomes. I can't think of a single case where someone impersonates a particle physicist or an inorganic chemist to sell snake oil.

The interest people take in their brains unfortunately creates this market for NeuroNonsense.

Friday, February 4, 2011

Psychology does not equal alchemy

The other night I was watching an episode of this show on PBS. It's a decent episode, showcasing some sexy new results in neuroscience, and it's accessible to a general audience. But then it had to end on this note:

"Our best hope [for understanding the brain] lies within neuroscientists. What are thoughts but electrical impulses among brain cells? What are ideas but novel firings of those cells? What are mental problems if not impulses that have misfired? In the same way that chemistry grows from the ashes of alchemy, neuroscience, a field still in its infancy, may one day subsume psychology"

Deep breath in, deep breath out.

...OK...

This is like saying "one day physics will subsume chemistry". They are two different fields designed to answer questions at different levels of analysis. In the big picture of science, yes, neuroscience will subsume psychology, but neuroscience will in turn be subsumed by biology, chemistry, physics, and then pure math. In the meantime, it's rather useful to have separate fields.

But the bigger problem is the assumption that a picture with a glowing piece of brain real-estate tells you more about how we function than behavior does. Although the pictures are hugely compelling, knowing where a process takes place in the brain is not the same as knowing how it takes place. Honestly, I think we're going to look back at the last 15 years of cognitive neuroscience and see them as lost years in which we got distracted by brain porn. As a thought exercise, I've tried to think of a single fMRI finding that makes a unique contribution to what we know about the brain, one not already established by cellular recording, behavior, and/or patient studies. I'm having a hard time coming up with one, but please send me your examples if you have them!

Wednesday, December 29, 2010

Why brain-based lie detection is not ready for "prime time"

We are in a new and interesting legal world. Although to date no US court case has used brain-based lie detection techniques as evidence, several cases have sought such evidence and settled out of court. fMRI is the most common brain-based lie detection technology, with two companies, Cephos and No Lie MRI, providing this service in the legal domain. There have also been attempts to use EEG for deception detection. Notably, such a technique was used in part to prosecute a young woman for murder in India in 2008.

I am far from the first to point out that this technology is highly exploratory and not accurate enough to be used in a court of law. My goal here is to outline a good number of the reasons why.

9. We do not know how accurate these techniques are. Although the two aforementioned companies boast lie detection accuracy rates of 90%+, these claims cannot be verified by an independent lab because the companies' methods are trade secrets. For example, there are few peer-reviewed studies of the putative EEG-based marker of deception, the P300, and most come from the lab that is commercially involved with a company trying to sell the technique as a product. Interestingly, an independent lab studying the effect of countermeasures on the technique found an 82% hit rate in controls (not the 99% accuracy claimed by the company), and this was reduced to 18% when countermeasures were used!

8. In the academic literature, where we do have access to methodology, we are limited to testing typical research participants: undergraduate psychology majors (although see this). For a lie detection method to be valid, it would need to be shown to be accurate in a wide variety of populations, varying in age, education, drug use, etc. This population is not likely to be as skilled in deception as a career criminal might be, and it has been shown that the more often one lies, the easier lying becomes. Most fMRI-based lie detection techniques rest on the assumption that lying is hard to do and thus requires the brain to use more energy. If frequent lying makes lying easy, then practiced liars may not show this pattern of brain activity at all.
     Although a fair amount has been made lately of WEIRD (Western, Educated, Industrialized, Rich, Democratic) subjects, participants in these studies are actually beyond WEIRD: they are almost exclusively right-handed, and predominantly male.

7. Along the same line, the "lies" told in these studies rarely have an impact on the lives of the student participants. Occasionally, an extra reward is given if the participant is able to "trick" the system, but in the real world, with reputations and civil liberties at stake, one might imagine doing a much better job of tricking the scanner. Being instructed to lie in a low-stakes laboratory situation is simply not the same as the high-stress situations where this technology would be used in real life. Some studies try to ameliorate this by using a mock crime (such as a theft) as the deceptive scenario, but these are also of limited use because participants know the situation is contrived.

6. Like traditional polygraph tests, it is possible to fool brain-based lie detection systems with countermeasures. Indeed, in an article in press at NeuroImage, Ganis and colleagues found that deliberate countermeasures on the part of their participants dropped deception detection from 100% to 30%. Most studies of fMRI lie detection have found more brain activation for lies than for truths, suggesting that lying is more difficult for participants. But is this still the case with well-rehearsed lies? What about subjects performing mental arithmetic during truthful answers to fool the scanner?
    
5. There is a general lack of consistency in the findings in the academic literature. To date, there are ~25 published, peer-reviewed studies of deception and fMRI, and at least as many brain areas implicated in deception, including the anterior prefrontal area, ventromedial prefrontal area, dorsolateral prefrontal area, parahippocampal areas, anterior cingulate, left posterior cingulate, temporal and subcortical caudate, right precuneus, left cerebellum, insula, putamen, caudate, thalamus, and various regions of temporal cortex! Of course, we know better than to believe that there is some dedicated "lying region" of the brain, and given the diversity of deception tasks (everything from "lie about this playing card" to "lie about things you typically do during the day"), the diversity of regions is not surprising. However, the lack of replication is a cause for concern, particularly when we are applying science to issues of civil liberties.

4. An additional issue is that many of these studies are not properly balanced: participants are instructed to lie more or less often than they are instructed to tell the truth. When one response type is much rarer than the other, any difference in brain activity may reflect the relative novelty or infrequency of the rarer response rather than deception itself.

3. There is a large difference between group averages and detecting deception within an individual. Knowing that, on average, brain region X is significantly more active in a group of subjects during deception than during truth does not tell you that, for subject 2 on trial 9, deception was likely to have occurred given the difference in activation (a toy simulation after this list makes the gap concrete). Of course, some studies are trying to work at this level of analysis, but right now they are in the minority.

2. Some of the untrue things we say are not lies at all. Most of us believe we are above-average drivers, and smarter and more attractive than most, even when these beliefs are not true. Memories, even so-called "flash-bulb" memories, are not foolproof.

1. Are all lies equivalent to the brain?  Are lies about knowledge of a crime the same in the brain as white lies such as "no, honey those pants don't make you look fat" or lies of omission or self-deceiving lies?
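
As promised in point 3, here is a toy simulation (in Python, with made-up numbers, not drawn from any study) of how a group-level effect can be statistically airtight while single trials remain nearly unclassifiable:

```python
# A toy illustration of the gap between group-level significance and
# single-trial prediction. All numbers here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials = 200

# Hypothetical "region X" activation: lies average slightly higher than truths.
truth = rng.normal(0.0, 1.0, n_trials)
lies = rng.normal(0.3, 1.0, n_trials)

t, p = stats.ttest_ind(lies, truth)
print(f"group effect: t = {t:.1f}, p = {p:.3g}")  # clearly significant

# But labeling a single trial by thresholding at the midpoint is barely
# better than a coin flip, because the two distributions overlap heavily.
threshold = (truth.mean() + lies.mean()) / 2
accuracy = ((lies > threshold).mean() + (truth <= threshold).mean()) / 2
print(f"single-trial accuracy: {accuracy:.2f}")  # ~0.56, barely above chance
```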

Sunday, October 24, 2010

Mind reading?


I first volunteered to be a participant in an fMRI study as a wide-eyed college freshman ten years ago. I was so excited to get to see a picture of my brain, but once I was tightly packed into the scanner, a few worries entered my mind: would it turn out that I had a tumor, or was I one of those people with half a brain? Would the experiment show that I’m not very smart, or that I'm vulnerable to mental illness? Would the graduate student administering the experiment know what I was thinking?

While my concerns were rather common, they were also rather unfounded. fMRI is quite good at predicting what you are thinking about, but only in laboratory situations where you are given a very short list of things to think about. For example, an fMRI scan can predict whether you are thinking about a face or a place, as the mental imagery for faces and places recruits different brain areas. And note that this works when your subjects are willing and able to think only about faces or places for a 20-second run inside the scanner.

Earlier, I wrote a little about the analysis of fMRI data. The inferences researchers make in those kinds of studies take the form "what area of the brain is more active for task 1 compared to task 2?" In prediction, the question becomes "given a pattern of brain activity, what was the participant seeing/hearing/doing?" Early prediction techniques relied on correlation: a voxel was predictive if its activity for a particular stimulus in one run correlated more highly with activity for the same stimulus in another run than with activity for a different stimulus. More modern prediction studies use machine learning and statistical classifiers, such as linear discriminant analysis and, in particular, support vector machines. This approach to data analysis has been both popular and fruitful, particularly for vision research. I recommend this review for more on the state of the art.
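
To give a flavor of how these classifier analyses work, here is a minimal sketch using scikit-learn. The data are synthetic stand-ins for voxel patterns, and every number is made up for illustration; no particular study's method is implied:

```python
# A minimal sketch of classifier-based fMRI decoding with scikit-learn.
# Synthetic data stand in for real voxel patterns; in practice each row
# would hold the preprocessed voxel activations from one trial or block.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500

# Hypothetical two-condition experiment: faces (0) vs. places (1).
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
# Give the "place" trials a small mean shift in a subset of informative voxels.
patterns[labels == 1, :50] += 0.5

# Cross-validated accuracy: can the classifier predict the condition
# from the voxel pattern better than the 50% chance level?
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```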

As impressed as we tend to be with both math-y and brain-y things, it is important to remember that we are still not able to decode arbitrary patterns of brain activity. When you are lying there in the scanner, it is possible to determine that you are daydreaming, but not the contents of your daydream. I am particularly concerned about the overselling of these studies in the titles of both lay and academic papers. But although the technology for fMRI decoding is advancing rapidly, I do not see the scanner as a place where future civil liberties go to die. Getting readable data from the scanner requires your participant to be very co-operative, so its use as an interrogation device is limited. But for you paranoid types, perhaps you should consider some metal dental work… or even a hair tie!



Wednesday, September 29, 2010

Where does bad fMRI science writing come from?


A perennial favorite topic on science blogs is the examination of badly designed or badly interpreted fMRI studies. However, little time is spent on why there is so much of this material to blog about! Here, I’m listing a few reasons why mistakes in experimental design and study interpretation are so common.

Problems in experimental design and analysis
These are problems in the design, execution and analysis of fMRI papers.

Reason 1: Statistical reasoning is not intuitive
Already, I have mentioned non-independence error in the context of clinical trial design. In fMRI, researchers often ask questions about activity in certain brain areas. Sometimes these areas are anatomically defined (such as early visual areas), but more often they are functionally defined, meaning they cannot be distinguished from surrounding areas by the physical appearance of the structure, but are instead defined by responding more to one type of stimulation than another. One of the most famous functionally defined areas is the Fusiform Face Area (FFA), which responds more to faces than to objects. Non-independence error often arises with these functionally defined areas. It is completely kosher to run a localizer scan containing examples of stimuli known to drive your area and stimuli known to contrast with it (faces and objects, in the case of the FFA), and then run a separate experimental block containing whatever experimental stimuli you want. Then, when you analyze your data, you test your experimental hypothesis using the voxels (“volumetric pixels”) defined by your localizer scan. What is not acceptable is to use the same data both to define your region of interest and to test your hypothesis in it.
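
To see why this matters, here is a toy demonstration (mine, not from any paper) of the non-independence, or "double dipping," error using pure noise. Voxels selected and tested on the same data show a striking but entirely spurious effect:

```python
# A toy demonstration of the non-independence ("double dipping") error.
# Both "runs" are pure noise: there is no real signal anywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, n_trials = 10_000, 40

localizer = rng.normal(size=(n_voxels, n_trials))
experiment = rng.normal(size=(n_voxels, n_trials))

# Select the 50 voxels with the highest mean response in the localizer run.
roi = np.argsort(localizer.mean(axis=1))[-50:]

# Circular analysis: test the selected voxels on the SAME data.
t_circ, p_circ = stats.ttest_1samp(localizer[roi].mean(axis=0), 0)
# Correct analysis: test them on the independent experimental run.
t_ok, p_ok = stats.ttest_1samp(experiment[roi].mean(axis=0), 0)

print(f"circular:    t = {t_circ:.1f}, p = {p_circ:.2g}")  # wildly significant
print(f"independent: t = {t_ok:.1f}, p = {p_ok:.2g}")      # null, as it should be
```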

A separate but frequent error in fMRI data analysis is the failure to correct for multiple comparisons. There are hundreds of thousands of voxels in the brain, so some voxels will show high activation through random chance alone. Craig Bennett and colleagues made this point in a memorable way when they found a 3-voxel-sized area in the brain of a dead salmon that "responded" to photographs of emotional situations. Of course, the deceased fish was not thinking about highly complex human emotions; the activation was due to chance.
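
The arithmetic behind the problem is simple; the numbers below are assumed for illustration:

```python
# Back-of-the-envelope multiple-comparisons arithmetic (illustrative numbers).
n_voxels = 100_000  # independent tests, one per voxel
alpha = 0.001       # a common uncorrected voxelwise threshold

# Expected number of voxels crossing the threshold from noise alone.
print(n_voxels * alpha)  # 100 false-positive "activations"

# A Bonferroni correction divides the threshold by the number of tests.
print(0.05 / n_voxels)  # corrected per-voxel threshold: 5e-07
```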

Now, it is all too easy to read about these problems and feel very smug about the retrospectively obvious. But these researchers are not misleading or dumb. Stated another way, the non-independence problem is "we found voxels in the brain that responded to X, and then correlated this activation with Y". Part of the controversy surrounding "voodoo correlations" is the fact that, intuitively, there doesn’t seem to be much difference between correct and incorrect data analysis. Another important factor in the persistence of incorrect analysis is that statistical reasoning is not intuitive, and our intuitions have systematic biases.

Reason 2: It is both too easy and too hard to analyze fMRI data
There are many steps to fMRI data analysis, and there are several software packages available to do it, both free and non-free. Data fresh out of the scanner need to be pre-processed before any data analysis takes place. This pre-processing corrects for small movements made by subjects, smoothes the data to reduce noise, and often warps each individual’s brain to a standard brain. For brevity, I will refer the reader to this excellent synopsis of fMRI data analysis. The problem is that it is altogether too easy to “go through the motions” of data analysis without understanding how decisions about various parameters affect the result. And although there is wide consensus about the steps of the analysis pipeline, this paper shows that differences in the statistical decisions made by software developers have big effects on the overall results of a study. It is, in other words, too easy to go through analysis motions that are too hard to understand.
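
For a rough feel of what two of these pre-processing steps look like in code, here is a sketch using the nilearn library. The file name is hypothetical, motion correction (usually done upstream in tools like FSL or SPM) is omitted, and the template step shown is a simple grid resample rather than a full registration-based normalization:

```python
# A minimal pre-processing sketch with nilearn. "func.nii.gz" is a
# hypothetical 4D functional image; this is not a complete pipeline.
from nilearn import datasets, image

func_img = image.load_img("func.nii.gz")

# Spatial smoothing with a 6 mm FWHM Gaussian kernel to suppress noise.
smoothed = image.smooth_img(func_img, fwhm=6)

# Put the data on the grid of the standard MNI152 template so that
# different brains can be compared voxel-to-voxel. (A real pipeline
# would first estimate a registration to the template; this only
# resamples onto the template's grid.)
template = datasets.load_mni152_template()
normalized = image.resample_to_img(smoothed, template)
normalized.to_filename("func_preprocessed.nii.gz")
```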

Problems in stretching the conclusions of studies
In contrast, these are problems in the translation of a scientific study to the general public.

Reason 3: Academics are under pressure to publish sexy work
As I discussed earlier, academia is very competitive, and it is widely believed that fMRI publications carry more weight in hiring and tenure decisions than behavioral studies do. (Note: I have not found evidence of this, but it seems like something someone should have computed.) Sexy fMRI work also makes great sound-bites for university donors (see “neuro-babble and brain-porn” below). Here, slight exaggerations of the conclusions may creep in, and noisy peer review does not always catch them.

Reason 4: Journals compete with one another for the sexiest papers
Science and Nature each have manuscript acceptance rates below 10%. If we assume that more than 10% of the papers submitted to these journals are of sufficient quality to be accepted, then some other selection criterion, such as novelty, is likely being applied during the editorial process. It is also of note that these journals have word limits of under 2,000 words, making it impossible to fully describe experimental techniques. Collectively, these conditions make it possible for charismatically expressed papers with dubious methods to be accepted.

Reason 5: Pressure on the press to over-state scientific findings
Even for well-designed, well-analyzed, and well-written papers, things can get tricky in the translation to the press. Part of the problem is that many scientists are not completely effective communicators. But equally problematic is the pressure placed on journalists to cast every study as a revolution or breakthrough. The truth is that almost no published paper will turn out to be revolutionary in the fullness of time; science works very slowly. Journalists perceive that their audience would rather hear about the newly discovered “neural hate circuit” or the “100% accurate Alzheimer’s disease test” than about a moderate-strength statistical association between a particular brain area and a behavioral measure.

Reason 6: Brain porn and Neuro-babble
(I would briefly like to thank Chris Chabris and Dan Simons for putting these terms into the lexicon.) “Brain porn” refers to the colored-blob-on-a-brain style of image that is nearly ubiquitous in popular science writing. “Neuro-babble” is often a consequence of brain porn: when viewing such a science-y picture, one’s threshold for accepting crap explanations is dramatically lowered. There have been two laboratory demonstrations of this general effect. In one study, a bad scientific explanation was presented to participants either by itself or with one of two graphics: a bar graph or a blobby brain-scan image. Participants who viewed the brain image, but not those who saw the bar graph or no image, were more likely to say that the explanation made sense. In the other, a bad scientific explanation was given to subjects either alone or with the preface “brain scans show that…”. Non-scientist participants, as well as neuroscience graduate students, were more likely to rank the prefaced bad explanations as better, even though the logic was equally uncompelling in both cases. These should serve as cautionary tales for the thinking public.

Reason 7: Truthiness
When we are presented with what we want to believe, it is much harder to look at the world with the same skeptical glasses.