
Thursday, January 12, 2012

Research Works Act - seriously?

I am not a fan of the academic publishing industry, and have written before about the need for more openness in the publishing process. My position is very simple: it is not ethical to force taxpayers to buy access to scientific articles reporting research they already paid for.

I am very dismayed at the introduction of the Research Works Act, a piece of legislation designed to end the NIH Public Access Policy and to block similar openness initiatives in the future.

Sigh... even in academic publishing, we're socializing the risks and privatizing the gains. Here, I agree completely with Michael Eisen's statement in the New York Times:
 "But the latest effort to overturn the N.I.H.’s public access policy should dispel any remaining illusions that commercial publishers are serving the interests of the scientific community and public."

As this bill was written by representatives taking money from the publishing industry, perhaps we should include lawmakers in that group as well.

Monday, September 26, 2011

Not the new truth serum.

Magnetic Pulses to the Brain Make it Impossible to Lie: Study

Zapping the brain with magnets makes it IMPOSSIBLE to lie, claim scientists

Holy crap! Hold on to your civil liberties...get your tin foil hat.... Something really exciting must be going on in neuroscience.

Right?

So it turns out that these articles refer to the following study:

In the study, participants were shown red and blue circles and asked to name the color of each one. At will, a participant could choose to lie or tell the truth about the color. While they performed this task, repetitive transcranial magnetic stimulation (rTMS) was applied to one of four brain areas: the right or left dorsolateral prefrontal cortex (DLPFC), or the right or left parietal cortex (PC). TMS uses a transient magnetic field to induce electrical activity in the brain. Because it directly alters the firing of neurons, TMS lets researchers test whether a brain area actually causes a behavior, rather than merely correlating with it. The DLPFC had previously been implicated in generating lies, so the authors sought to assess whether it plays a causal role in deciding whether or not to tell a lie. The parietal cortex served as a control area, as it is not generally implicated in the generation of lies.

So, is TMS to DLPFC the new truth serum? Here, I've re-plotted their results:

When TMS was applied to the left DLPFC (compared with the left PC), participants were less likely to choose to tell the truth, whereas they were slightly more likely to be truthful when stimulation was applied to the right DLPFC. As you can see from the graph, the effect, although statistically significant, is tiny: stimulation shifts the propensity to lie or truth-tell by about 5% in either direction. This could not be farther from the "impossible to lie" headlines.
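To put a 5% shift in perspective, here is a back-of-the-envelope two-proportion z-test in Python. The trial counts and rates below are hypothetical stand-ins, not the study's actual numbers; the point is only that a shift this small needs a lot of trials to reach significance at all.

from math import sqrt

# Hypothetical numbers for illustration only -- not Karton & Bachmann's data.
n1, n2 = 800, 800        # trials per stimulation site (assumed)
p1, p2 = 0.50, 0.55      # truth-telling rate: control site vs. DLPFC (assumed)

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

print(f"z = {z:.2f}")    # ~2.0: just clears the p < .05 bar, yet the shift is only 5 points

Even when "significant", an effect like this moves the needle from a coin flip to a slightly weighted coin flip.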

Interesting? Yes. Useful for law enforcement? Probably not.

Karton, I., & Bachmann, T. (2011). Effect of prefrontal transcranial magnetic stimulation on spontaneous truth-telling. Behavioural Brain Research, 225(1), 209-214. PMID: 21807030

Wednesday, December 29, 2010

Why brain-based lie detection is not ready for "prime time"

We are in a new and interesting legal world. To date, no US court case has admitted brain-based lie detection techniques as evidence, but several cases have sought to introduce such evidence and then settled out of court. fMRI is the most common brain-based lie detection technology, with two companies, Cephos and No Lie MRI, offering the service in the legal domain. There have also been attempts to use EEG for deception detection; notably, such a technique was used in part to prosecute a young woman for murder in India in 2008.

I am far from the first to point out that this technology is highly exploratory and not accurate enough to be used in a court of law. My goal here is to collect the major reasons why in one place.

9. We do not know how accurate these techniques are. Although the two aforementioned companies boast lie-detection accuracy rates of 90%+, these claims cannot be verified by independent labs because the companies' methods are trade secrets. For example, there are few peer-reviewed studies of the putative EEG-based marker of deception, the P300, and most come from a lab that is commercially involved with the company selling the technique as a product. Tellingly, an independent lab studying the effect of countermeasures on the technique found an 82% hit rate in controls (not the 99% accuracy claimed by the company), and this dropped to 18% when countermeasures were used!

8. In the academic literature, where we do have access to the methodology, we are limited to testing typical research participants: undergraduate psychology majors (although see this). For a lie detection method to be valid, it would need to be shown accurate in a wide variety of populations varying in age, education, drug use, and so on. Undergraduates are not likely to be as skilled in deception as a career criminal might be, and it has been shown that the more often one lies, the easier lying becomes. Most fMRI-based lie detection techniques rest on the assumption that lying is hard to do, and thus requires the brain to use more energy. If frequent lying makes lying easy, then practiced liars may not show this pattern of brain activity at all.
     Although a fair amount has been made lately of WEIRD subjects (Western, Educated, Industrialized, Rich, and Democratic), participants in these studies are actually beyond WEIRD: they are almost exclusively right-handed and predominantly male.

7. Along the same line, the "lies" told in these studies rarely have any impact on the lives of the student participants. Occasionally an extra reward is given if the participant can "trick" the system, but in the real world, with reputations and civil liberties at stake, one might imagine doing a far better job of tricking the scanner. Being instructed to lie in a low-stakes laboratory situation is simply not the same as the high-stress situations where this technology would be used in real life. Some studies try to ameliorate this by using a mock crime (such as a theft) to generate the deception, but these are also of limited use, since participants know the situation is contrived.

6. Like traditional polygraph tests, brain-based lie detection systems can be fooled with countermeasures. Indeed, in an article in press at NeuroImage, Ganis and colleagues found that deliberate countermeasures by their participants dropped deception detection from 100% to 30%. Most studies of fMRI lie detection have found more brain activation for lies than for truths, suggesting that lying is more difficult for participants. But is this still the case with well-rehearsed lies? Or with subjects performing mental arithmetic during truthful answers to fool the scanner?
    
5. There is a general lack of consistency in the findings of the academic literature. To date there are roughly 25 published, peer-reviewed studies of deception and fMRI, and among them at least as many brain areas implicated in deception, including the anterior prefrontal cortex, ventromedial prefrontal cortex, dorsolateral prefrontal cortex, parahippocampal areas, anterior cingulate, left posterior cingulate, the subcortical caudate and putamen, right precuneus, left cerebellum, insula, thalamus, and various regions of temporal cortex! Of course, we know better than to believe that there is some dedicated "lying region" of the brain, and given the diversity of deception tasks (everything from "lie about this playing card" to "lie about things you typically do during the day"), the diversity of regions is not surprising. Still, the lack of replication is cause for concern, particularly when we are applying science to questions of civil liberties.

4. Many of these studies are not properly balanced: participants are instructed to lie more or less often than they are instructed to tell the truth. This matters because an infrequent condition evokes its own brain response simply by being rare, and because raw accuracy figures are easily inflated when one class dominates, as the sketch below illustrates.
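A minimal sketch of that base-rate problem, with made-up trial counts:

# Illustration with invented numbers: in a design where 80% of trials are
# truths, a "detector" that ignores the brain entirely and always answers
# "truth" still looks accurate while detecting nothing.
truth_trials, lie_trials = 80, 20              # assumed 4:1 imbalance

accuracy = truth_trials / (truth_trials + lie_trials)
print(f"overall accuracy: {accuracy:.0%}")     # 80% -- sounds impressive
print(f"lies caught: {0 / lie_trials:.0%}")    # 0%  -- the number that matters

Any reported accuracy from an unbalanced design has to be read against this baseline, not against 50%.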

3. There is a large difference between group averages and detecting deception within an individual. Knowing that, on average, brain region X is significantly more active across a group of subjects during deception than during truth does not tell you that, for subject 2 on trial 9, deception likely occurred given the activation observed. Some studies are beginning to work at this level of analysis, but right now they are the minority; the simulation below shows how wide the gap can be.
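Here is a small simulation of that gap in Python. The effect size and trial counts are invented for illustration: a group difference can be wildly significant while single-trial classification barely beats a coin flip.

# Simulation with invented numbers: group-level significance does not
# translate into reliable single-trial prediction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 5000
truth = rng.normal(0.00, 1.0, n_trials)   # activation on truth trials (arbitrary units)
lie   = rng.normal(0.15, 1.0, n_trials)   # lies slightly higher on average (assumed effect)

t, p = stats.ttest_ind(lie, truth)
print(f"group difference: t = {t:.1f}, p = {p:.1e}")   # hugely significant

# Classify each trial by thresholding at the midpoint of the two means.
threshold = (truth.mean() + lie.mean()) / 2
accuracy = ((lie > threshold).mean() + (truth <= threshold).mean()) / 2
print(f"single-trial accuracy: {accuracy:.0%}")        # only ~53%

With an effect this size, the "detector" is wrong on nearly half of all trials: a coin flip with a brain scan attached.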

2. Some of the untrue things we say are not lies at all. Most of us believe we are above-average drivers, and smarter and more attractive than most, even when these beliefs are not true. Memories, even so-called "flashbulb" memories, are not foolproof.

1. Are all lies equivalent to the brain? Are lies about knowledge of a crime the same in the brain as white lies ("no, honey, those pants don't make you look fat"), lies of omission, or self-deceiving lies?

Saturday, October 9, 2010

Did my brain make me do it?


Our first case is a 40-year-old man who developed a new and intense interest in child pornography. His sexuality also generally increased, and he found himself frequenting prostitutes even though he never had before. He was ashamed of his behavior, went to great lengths to hide it, and could articulate that it was morally wrong. However, he then began making sexual advances on his pre-pubescent stepdaughter and spoke of raping his landlady. He was removed from his home, but failed a 12-step sexual addiction program because he could not restrain himself from soliciting sexual favors from the staff and fellow group members. Having failed the program, he was sentenced to prison, but shortly before admission he developed debilitating headaches and balance problems. An MRI revealed a large tumor in his orbitofrontal cortex, an area associated with self-control, executive function, and the regulation of social behavior. Following surgical removal of the tumor, the man successfully completed the sexual addiction program, moved back in with his family, and no longer had pedophilic or other deviant urges.

Consider, then, our second case: the 1992 trial of Herbert Weinstein, a 65-year-old advertising executive who was charged with strangling his wife to death and then, in an effort to make the murder look like a suicide, throwing her body out the window of their Manhattan apartment. At his neuropsychiatric evaluation, it was found that Mr. Weinstein had a small arachnoid cyst in his brain. The defense moved to use this cyst as evidence of Mr. Weinstein's inability to control, or be responsible for, his behavior. Cysts of this kind have never been linked to mental illness or violent behavior. After a contentious pre-trial hearing on the admissibility of this evidence, Mr. Weinstein accepted a plea bargain.

Are both of these men equally responsible for their own behavior?

A central tenet of neuroscience is that all behavior is caused by the brain. This sounds simple enough, but given our long intellectual history of separating mind from brain, we hold very dear the idea of an "I", separate from the 3 pounds of electrical meat that is our brain, calling the shots. After all, "I" feel like "I" make the decisions that shape my life. If "I" weren't responsible for these decisions, if the decisions instead came from the electrical meat, which is governed by the laws of physics, then how is it that "I" decided to wear the blue shirt instead of the red one? More troubling: if "I" am just my brain, and my brain is malfunctioning, am "I" still responsible for my behavior?

People are remarkably consistent in their moral judgments. Therefore, with some confidence, I can predict that you feel that the man from case 1 is less responsible for his behavior than Mr. Weinstein from case 2.  Is this gut-level feeling rational? After all, both men had damage to their brains, and their brains govern their behavior.

The problem with many "my brain made me do it" arguments is that an association between a brain injury and a behavioral problem is not causal evidence that the injury caused the behavior. Another way of saying this: correlation does not imply causation. We are quick to call B.S. on associations with no plausible causal connection: although drowning is associated with ice cream consumption, we do not conclude that ice cream causes drowning. In the criminal realm, we are also likely to see through a Twinkie defense, even if "neuro-babble" makes for a more compelling case.
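The ice cream example is easy to simulate. In this toy sketch (all numbers invented), summer temperature drives both ice cream sales and drownings, and the two end up strongly correlated with no causal link between them:

# Invented toy data: temperature (the confounder) drives both variables,
# so ice cream sales and drownings correlate despite no causal link.
import numpy as np

rng = np.random.default_rng(1)
temperature = rng.uniform(0, 35, 365)                         # daily highs over a year (deg C)
ice_cream = 10 + 2.0 * temperature + rng.normal(0, 5, 365)    # sales track the heat
drownings = 0.5 + 0.1 * temperature + rng.normal(0, 1, 365)   # so do drownings

r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"correlation: r = {r:.2f}")   # ~0.7: strongly positive, and entirely confounded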

However, in the case of the first man, we can establish a causal connection between his brain injury and his bad behavior: his behavior was (presumably) "normal" before the tumor and after its removal. Unfortunately, we cannot surgically repair most malfunctioning brains, so most proposed connections between brain and behavior remain speculative.

Beyond the problem of causality is the problem of will. How can we establish that someone absolutely cannot control his behavior? People can exert a degree of control over even autonomic functions using biofeedback or certain styles of meditation; in the lab, feedback from fMRI has been used to train subjects to willfully activate and deactivate a region involved in the perception of pain. Yet it is incredibly difficult to willfully change most behaviors. It takes many days of consistent effort to form a new habit, and many ex-smokers report that the physical withdrawal from nicotine was much easier to deal with than reprogramming the automatic response to reach for a cigarette in familiar contexts. Although the politics of how we frame addiction is a larger topic for another post, suffice it to say that there is no consistent agreement about which behaviors we expect people to be able to control and which we don't.

So, did my brain make me do it? Well, yes, of course it caused my behavior. Am I to be held responsible for that behavior? Given the difficulties above, I have to agree with Michael Gazzaniga, who argues that this question should be left to the legal scholar, not the neuroscientist.