Sunday, October 31, 2010

Getting Savage on evolutionary psychology

Now, I love sex advice columnist Dan Savage. I have been a faithful reader and listener of his columns, podcasts and blogs for some time now. And sure, I don’t agree with him on every bit of advice, but he bats a solid .900 and articulately calls out many forms of B.S. But right now, I have a beef with Mr. Savage over his love affair with the new evolutionary psychology-inspired book Sex at Dawn by Christopher Ryan and Cacilda Jethá. This is an example of the all-too-common use of science-y thinking to justify a particular belief, in this case, the use of evolutionary psychology to endorse the “naturalness” of polyamory.

My contentions are the following: 1. while I am all for the promotion of reading and scientific literacy, we need to be especially vigilant against accepting poor science that confirms what we already believe; 2. we need to critically examine whether science can inform social policy discussions; and 3. we need to abandon the notion that the “naturalness” of an act means that the act is desirable.

Problems with evolutionary psychology
I need to point out in the spirit of full disclosure that I have not read Sex at Dawn. However, from Mr. Savage’s multiple interviews with Dr. Ryan, it is evident that the apple of this book does not fall far from the tree of Buss and Baker.

Evolutionary psychology offers only post-hoc fits of theory to data
In evolutionary psychology, one asks how human evolutionary history can explain aspects of current human behavior. Functionally, it amounts to doing thought experiments on questions such as “how did a caveman’s life influence the shape of the human penis?” The problem with this framing is that you start with some data (in this case, the shape of human penises) and then search for a model that fits. You can come up with many such models, because you are fitting the data after the fact, but you have no guarantee that your model is correct.

Let’s take a case in point: an issue brought up in the latest interview with Dr. Ryan on the Savage Lovecast. The question: why are human penises larger than gorilla penises when gorillas are larger than men? The given answer: because they were designed as plungers to remove the semen of rival males from the reproductive tract of a female. The larger theory behind this answer lies in the idea of sperm competition, the notion that females practice selective non-monogamy as a means of maximizing genetic quality in their offspring. The male, worried that he might be cuckolded into investing resources in offspring not genetically related to him, needs adaptations to keep his partner from being impregnated by rivals. Therefore, it is to his advantage to have a “plunger penis” that will reduce the probability of pregnancy from a rival.
It’s kind of like an intellectual Rube Goldberg machine, isn’t it? Or perhaps more fittingly, like one of Kipling’s “just-so stories”.

The “scientific data” for this claim come from this paper, which might be the most hilarious scientific study I’ve ever read (and this includes the smoking pot in the fMRI scanner study). From the abstract:

Inanimate models were used to assess the possibility that certain features of the human penis evolved to displace semen left by other males in the female reproductive tract. Displacement of artificial semen in simulated vaginas varied as a function of glans/coronal ridge morphology, semen viscosity, and depth of thrusting. Results obtained by modifying an artificial penis suggest that the coronal ridge is an important morphological feature mediating semen displacement.
Yes, kids… this is research with dildos and masturbation sleeves. Other great sound bites from the article include the “recipe” for artificial semen: 

Simulated semen was created by mixing 7 ml of water at room temperature with 7.16 g of cornstarch and stirring for 5 min. After trying different mixtures of cornstarch and water, this recipe was judged by three sexually experienced males to best approximate the viscosity and texture of human seminal fluid.

And in addressing limitations of the current paradigm:

A limitation of our attempts to model semen displacement was the greater rigidity of the prosthetic as compared to real genitals. The artificial vaginas did not expand as readily as real vaginal tissue nor did the phalluses compress, and, as a result, semen displacement was assessed on the basis of a single insertion. The effects, however, were robust and generalized across different artificial phalluses, different artificial vaginas, different types of simulated semen, and different semen viscosities.

…Sigh…. My own research seems so vanilla in comparison! But in all seriousness, extraordinary claims require extraordinary evidence, and this is not that evidence.
Evolutionary psychology does not make uniquely falsifiable claims
The hallmark of actual science is that it makes predictions that can be falsified and separated from other possible explanations. Evolutionary psychology does not do this. For example, the finding that men who have spent more time away from their partners find their partners more attractive and desirable, and ejaculate semen with higher sperm counts during copulation, is taken as evidence for the sperm competition hypothesis. The argument is that as the man has not observed his partner, he is threatened by sperm competition, so it is to his advantage to copulate often and with… uh, greater virility. Although these studies control for time since last copulation, it doesn’t take much creativity to come up with alternative explanations.

Another example: the sperm competition hypothesis would predict that men would be more concerned with sexual infidelities of a partner (as this could result in cuckoldry) and women would be more concerned with emotional infidelities (as this could result in him leaving her without resources, or diverting resources into another partner). To test this prediction, David Buss conducted many surveys with many different groups asking them whether they would theoretically be more upset by a sexual or emotional infidelity. As nicely shown in David Buller’s critique of evolutionary psychology, although more men than women say that sexual infidelity is more upsetting, half of the men are still choosing emotional infidelity as more upsetting, so this model is far from complete.

Evolutionary psychology assumes that we know what psychological pressures existed for our ancestors in the Pleistocene.
We don’t

A closely related problem is that evolutionary psychology assumes that the mind evolved to solve the problems of the Pleistocene and then remained static for over 12,000 years. This seems implausible, as large species-wide shifts have been observed in as little as 18 generations (less than 500 years for human generations).

However, many people who hate evolutionary psychology do so for irrational reasons
Evolutionary psychology is fine for intellectual masturbation, but we should strongly question its place as an actual science. However, many of its loudest critiques are based on emotional and political responses, rather than on the quality of the academic content.

Consider Megan McArdle’s critique of Sex at Dawn for The Atlantic. She writes:

“For example, like a lot of evolutionary biology critiques, this one leans heavily on bonobos (at least so far). Here's the thing: humans aren't like bonobos. And do you know how I know that we are not like bonobos? Because we're not like bonobos.”
(Emphasis in original). 

Although I am sure Ms. McArdle is more articulate on other matters, this illustrates that when our beliefs are challenged, we are quick to declare scientific inquiry into the matter in question useless.

Evolutionary psychology stirs up a political hornet’s nest. If we believe that our minds evolved to solve problems of the Pleistocene and have remained largely unchanged, this suggests that our minds have little capacity to change. Therefore, we can do little about real social problems such as war, racism and rape.

As Steven Pinker points out, ignoble tendencies do not have to lead to ignoble behavior. In other words, what “is” is not the same as what “ought to be”. The confusion between these two concepts comes from the naturalistic fallacy: confounding what is natural with what is good. Which leads me to my last problem with Dan Savage’s promotion of this book…

Things that are natural are not necessarily desirable
Let’s step back and assume for a moment that the science of evolutionary psychology were solid, and that Ryan’s hypothesis about the polyamorous nature of humans were true. There would still be a major problem with Dan Savage’s use of this book to endorse polyamorous relationships: just because some behavior is fundamental to the nature of human beings does not mean that it’s a desirable state for current human beings.

Let me be clear on this point - I am not saying that humans shouldn’t be polyamorous. I believe consenting adults should do whatever they like. However, I am saying that the “naturalness” of polyamory does not inform its desirability.

Savage and Ryan are implicitly stating that since polyamory occurs throughout animal species and in human evolutionary history, it is natural. OK, but so are war, conquest, exploitation and rape, and we do not condone these.

Dan, you are a smart guy… Don’t get sucked into poor science just because it tells a compelling story that you want to believe!

Sunday soundbites

The same intervals that sound sad in music also sound sad in speech.

Only in Texas would a university put together a spreadsheet showing how much money each professor is making for, or costing, the university. Times, they are a-changin' in academia... and it looks like someone with similarly jaded views turned them into a movie.

Speaking of metrics, Academic Productivity has a great piece on new alt-metrics for measuring, well, academic productivity.

And the extent to which Google tools can inform peer review can be found here.

Oliver Sacks has just released a new book on vision. Hear his interview on NPR's Fresh Air. See also the review in Nature. I can't wait to get my hands on this book!

The Neurocritic takes on some overblown claims about low female libido and the brain. See also Neuroskeptic.

A new study shows that methylphenidate (Ritalin) increases activity in an attention network while decreasing activity in the default network.

The Neuroethics and Law blog takes on the neuroscience of marijuana legalization in time for the Proposition 19 vote in California. Like most public policy issues in neuroscience, it's complicated.

What would be the impact of training 10,000 more science and math teachers on American scientific innovation? As reported in Science, no one knows.

Your "inner voice" of self control might really be a voice.

Thursday, October 28, 2010

Brain control

This week’s Nature has a cool article on the conscious control of medial temporal lobe neurons. In this work, the authors studied twelve patients with intractable epilepsy who have had electrodes implanted in their brains to monitor for seizure activity before neurosurgery. The electrodes were recording from brain areas that have been associated with high-level visual recognition, including place recognition, object recognition and navigation.

In the paper, the researchers identified neurons that selectively responded to one of four pictures of celebrities (i.e. a neuron that fires to Marilyn Monroe, but not Michael Jackson). Then, participants were given a target image (e.g. Marilyn Monroe) and presented with composite images consisting of two of the celebrities overlapped at 50% transparency; one was the target and the other was one of the three remaining celebrities. The activity of the four neurons of interest was recorded, and in near-real time, the transparency of the display could be adjusted to show more of the preferred image of the most active neuron. The patients were instructed to do whatever was necessary to make the display turn into the target celebrity.
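The feedback rule at the heart of the experiment can be sketched in a few lines. This is a toy sketch: the function, the fixed step size, and the data format are my inventions, not details from the paper (which decoded spike rates in a more sophisticated way):

```python
def update_transparency(firing_rates, alpha, target, step=0.05):
    """One step of a hypothetical feedback loop: fade the display toward
    the preferred image of whichever neuron is currently most active.

    firing_rates: {celebrity: firing rate} for the four neurons of interest.
    alpha: current opacity of the target image (0 = all distractor, 1 = all target).
    Returns the new opacity, clipped to [0, 1].
    """
    winner = max(firing_rates, key=firing_rates.get)  # most active neuron
    alpha += step if winner == target else -step
    return min(1.0, max(0.0, alpha))
```

Run in a loop against live firing rates, a rule like this makes the display converge on whichever image the patient can drive their neurons toward.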

The patients were able to consciously control the neural firing to change the display to the target image about 70% of the time. The even cooler thing is that they were able to do this on the very first trial over half of the time! What were the subjects doing to control brain activity? Many reported that instead of engaging in mental imagery, they formed conceptual associations.

Overall, I think this paper is cool for two reasons: 1. it shows strong top-down modulation of high-level perceptual areas (and without massive amounts of training); and 2. these regions and decoding algorithms might help build new brain-computer interfaces for rehabilitation.

Cerf, M., Thiruvengadam, N., Mormann, F., Kraskov, A., Quiroga, R., Koch, C., & Fried, I. (2010). On-line, voluntary control of human temporal lobe neurons Nature, 467 (7319), 1104-1108 DOI: 10.1038/nature09510

Wednesday, October 27, 2010

Big data

It took 13 years to crunch through the 3 billion base pairs that make up the human genome. These data have been violating our assumptions ever since. My introductory biology textbook, published in 1996, speculates that there might be up to 100,000 genes in the genome. It turns out there are far fewer: about 20,000-30,000 by more recent estimates. The Human Genome Project sequenced only a few individuals, and combined them all into one genome. However, many of the big questions we have about genetics concern the differences between individuals.

We are starting to get answers to these questions. In today’s issue of Nature, a paper was published from the 1000 Genomes Project, a massive collaborative effort spanning three continents that is designed to describe and explain the variation between individuals’ genomes.

In this work, several types of variation were investigated as independent pilot studies. First, patterns within several mother-father-child trios were examined. Second, a group of 179 people had their whole genomes sequenced. Last, sparser sequencing was done on ~700 people from very diverse genetic backgrounds. While this paper mostly serves as a progress report and proof of concept, one very interesting bit is the finding that, on average, any given person carries 50-100 gene variants that have been associated with higher risk of illness. This is very reminiscent of last week’s PNAS article showing that possessing such “risky” alleles does not decrease your lifespan to a statistically significant degree.

The 1000 Genomes Project Consortium (2010). A map of human genome variation from population-scale sequencing. Nature, 467 (7319), 1061-1073 DOI: 10.1038/nature09534

Sunday, October 24, 2010

Mind reading?

I first volunteered to be a participant in an fMRI study as a wide-eyed college freshman ten years ago. I was so excited to get to see a picture of my brain, but once I was tightly packed into the scanner, a few worries entered my mind: would it turn out that I had a tumor, or was I one of those people with half a brain? Would the experiment show that I’m not very smart, or vulnerable to mental illness? Would the graduate student administering the experiment know what I was thinking?

While my concerns were rather common, they were also rather unfounded. fMRI is fairly good at predicting what you are thinking about, but only in laboratory situations where you are given a very short list of things to think about. For example, an fMRI scan can predict whether you are thinking about a face or a place, as the mental imagery for places and faces recruits different brain areas. And it is of note that this works when your subjects are willing and able to think only about faces or places for a 20-second run inside the scanner.

Earlier, I wrote a little about the analysis of fMRI data. The kind of inference researchers make in those studies is of the form “what area of the brain is more active for task 1 compared to task 2?” In prediction, the question becomes “given a pattern of brain activity, what was the participant seeing/hearing/doing?” Early prediction techniques relied on correlation: a voxel was predictive if its activity for a particular stimulus in one run correlated more highly with activity for the same stimulus in another run than with activity for a different stimulus. More modern prediction studies make use of machine learning and statistical classifiers such as linear discriminant analysis and, in particular, support vector machines. This approach to data analysis has been both popular and fruitful, particularly for vision research. I recommend this review for more on the state of the art.
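To make the early correlation-based scheme concrete, here is what it might look like on toy data (a sketch of my own, not code from any of the cited papers; the names and dimensions are invented):

```python
import numpy as np

def correlation_decoder(train_patterns, test_pattern):
    """Predict the condition of a test voxel pattern: pick the condition
    whose average training pattern correlates most with the test pattern."""
    best_label, best_r = None, -np.inf
    for label, patterns in train_patterns.items():
        template = np.mean(patterns, axis=0)  # average pattern per condition
        r = np.corrcoef(template, test_pattern)[0, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label

# toy data: 4 "voxels", 5 noisy training runs per condition
rng = np.random.default_rng(0)
face = np.array([1.0, 0.0, 1.0, 0.0])   # idealized face-selective pattern
place = np.array([0.0, 1.0, 0.0, 1.0])  # idealized place-selective pattern
train = {"face": face + rng.normal(0, 0.1, (5, 4)),
         "place": place + rng.normal(0, 0.1, (5, 4))}
test_run = face + rng.normal(0, 0.1, 4)  # a held-out "face" run
predicted = correlation_decoder(train, test_run)  # → "face"
```

A classifier such as a support vector machine replaces the per-condition template with a learned decision boundary over voxels, but the train-on-some-runs, test-on-others logic is the same.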

As impressed as we tend to be with both math-y and brain-y things, it is important to remember that we are still not able to decode arbitrary patterns of brain activity. When you are lying there in the scanner, it is possible to determine that you are daydreaming, but not the contents of your daydream. I am particularly concerned by the over-selling of these studies in the titles of both lay and academic papers. Although the technology for fMRI decoding is advancing rapidly, I also do not see the scanner as a place where future civil liberties go to die. Getting readable data from the scanner requires your participant to be very cooperative, so its use as an interrogation device is limited. But for you paranoid types, perhaps you should consider some metal dental work… or even a hair tie!

Sunday soundbites

Reboxetine: bad drug or baddest drug?

On an utterly unrelated note, JAMA finds that fewer of its studies are funded by industry after more rigorous statistical oversight was put in place.

Alleles associated with disease don't decrease your lifespan.

Prediction of mild cognitive impairment using white matter connectivity.

Saturday, October 23, 2010

So, what makes you happy?

Research on the factors increasing or decreasing happiness has been of interest to psychologists and economists alike. Early research indicated that, contrary to our intuition, major life events such as winning the lottery do not change our long-term ratings of our happiness. In other words, after a major life change, you will experience a temporary change in your happiness, but will return to being as happy as before the event. Such findings led to the set-point hypothesis, which states that each of us has an innate level of happiness, and that outside events, even large ones, don’t have a major influence on that set point.

Evidence for the set-point hypothesis typically comes from longitudinal studies in which the same people rate their happiness each year and report any major life events that occurred between surveys. From these data, you can chart happiness as a function of an event by time-locking each person’s happiness ratings to the year the event occurred and seeing how they change afterward. Let’s take marriage, for example. Although each person in the sample gets married at a different time, the researcher can define year 0 to be the year of marriage for each person, then look at that person’s happiness ratings both before and after marriage to determine its impact. Below is the kind of graph that you get: planning and getting married make people happier, and this boost lasts for a couple of years, but after that, people go back to being however happy they were before.
Interestingly, although people hedonically adapt to marriage, they do not hedonically adapt to divorce. In other words, although marriage will not cause a permanent change in your happiness, getting a divorce will make you sadder in the long term. Even more interesting is comparing the initial happiness ratings of people who marry and eventually divorce with those of people who marry and stay married. It turns out that people who stay married were happier than their to-be-married-then-divorced counterparts even five years before marriage, often before they had even met their future spouse!
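The time-locking analysis described above is simple to sketch. This is a toy version with made-up data; the function name and data format are mine, not from the studies:

```python
def event_locked_happiness(records, event_years, window=2):
    """Average happiness ratings aligned to each person's event year.

    records: {person: {calendar year: happiness rating}}.
    event_years: {person: the year that person's event (e.g. marriage) occurred}.
    Returns {offset from event year: mean rating across people}.
    """
    sums, counts = {}, {}
    for person, year0 in event_years.items():
        for offset in range(-window, window + 1):
            rating = records[person].get(year0 + offset)
            if rating is not None:  # skip missing survey years
                sums[offset] = sums.get(offset, 0.0) + rating
                counts[offset] = counts.get(offset, 0) + 1
    return {k: sums[k] / counts[k] for k in sorted(sums)}

# two people married in different calendar years, both happier around year 0
records = {"a": {1998: 6, 1999: 7, 2000: 8, 2001: 7, 2002: 6},
           "b": {2003: 5, 2004: 6, 2005: 7, 2006: 6, 2007: 5}}
curve = event_locked_happiness(records, {"a": 2000, "b": 2005})
# curve peaks at offset 0: (8 + 7) / 2 = 7.5
```

Even though the two marriages happened in different calendar years, aligning on year 0 makes the shared bump around the event visible.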

Given that people do not strictly have one happiness set point, what factors account for long-term changes in happiness? A new study in PNAS examines this question using 25-year longitudinal data from Germany. What I particularly enjoy about this study is that they focused on factors that people can control: although becoming disabled in an accident will likely lead to a long-term change in happiness, it is not something you can readily control. However, you do have control over things like your choice of romantic partner, your degree of religious involvement, your life priorities, whether you exercise, etc. Here is what they found:
- Focusing on money can’t buy you happiness. People who rated the acquisition of material goods as very important were less happy than people focused on family or volunteerism. Women whose partners were materially focused were particularly sad.
- You are less happy when you work more or (in particular) less than you would prefer to work.
- Being with friends and exercising regularly can make you happy.
- If you are a man, don’t be underweight. If you are a woman, don’t be obese.
- Choose a partner who is not neurotic.
- Although the study found a positive effect of religious participation, this is also correlated with altruism, family focus, and social participation, all of which independently increase happiness.

Although each of these factors had a small effect on happiness, all of them seem like good, common sense. I’ll go out for a run now.

Headey B, Muffels R, & Wagner GG (2010). Long-running German panel survey shows that personal and economic choices, not just genes, matter for happiness. Proceedings of the National Academy of Sciences of the United States of America, 107 (42), 17922-6 PMID: 20921399

Wednesday, October 20, 2010

Subtle influences on choice

Like most people, I like to think that my choices are a result of clear, rational thought. However, our decision processes are far more heuristic than we admit. Two new articles on choice bear this out:

Mantonakis and colleagues studied how the order in which items are presented affects our preferences and choices. In their experiment, several wines were presented to participants to taste and rate. Although participants were told that all the wines were from the same varietal (e.g. pinot grigio), in reality, all of the samples were the exact same wine! If preferences and ratings were rational, then the average rating a wine received should be the same regardless of whether it was tasted first or last. However, they found that the first wine tasted was preferred over wines in other serial positions, a finding known as the primacy effect.

In the second paper, Krajbich and colleagues modeled the choices subjects made between two pieces of junk food. Krajbich brought junk-food-loving, hungry participants into the lab and asked them to rate how desirable 70 different junk foods were to them. Then, in front of an eye tracker, they were presented with pairs of pictures of these food packages and asked to decide which they would prefer to eat after the experiment. Unsurprisingly, people were quicker to decide when the values of the two options were very different. However, when the decision was more difficult, participants tended to choose the item they looked at more.
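The model in question is a fixation-weighted evidence-accumulation process: evidence drifts toward the currently fixated item, with the unfixated item’s value discounted. A heavily simplified simulation (the parameter values and function are my own guesses, not the paper’s fitted model) might look like:

```python
import random

def addm_choice(v_left, v_right, fixations, theta=0.3, d=0.002,
                sigma=0.02, threshold=1.0, rng=random.Random(0)):
    """Simulate one trial of a simplified attentional drift-diffusion model.

    v_left, v_right: the subject's desirability ratings of the two items.
    fixations: iterable of 'left'/'right' gaze samples, one per time step.
    theta discounts the value of the item not being looked at.
    Returns 'left', 'right', or None if no decision boundary was reached.
    """
    rdv = 0.0  # relative decision value (positive favors left)
    for gaze in fixations:
        if gaze == 'left':
            drift = d * (v_left - theta * v_right)
        else:
            drift = d * (theta * v_left - v_right)
        rdv += drift + rng.gauss(0.0, sigma)  # drift plus accumulation noise
        if rdv >= threshold:
            return 'left'
        if rdv <= -threshold:
            return 'right'
    return None
```

Because looking at an item lets its full (undiscounted) value drive the drift, longer gaze at one option biases the process toward choosing it, which is the qualitative effect the eye-tracking data showed.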

Both of these studies are laboratory demonstrations of something advertisers seem to have known for a while: getting you to look at a product early and often might be enough to get you to buy it.

Mantonakis, A., Rodero, P., Lesschaeve, I., & Hastie, R. (2009). Order in Choice: Effects of Serial Position on Preferences Psychological Science, 20 (11), 1309-1312 DOI: 10.1111/j.1467-9280.2009.02453.x

Krajbich I, Armel C, & Rangel A (2010). Visual fixations and the computation and comparison of value in simple choice. Nature neuroscience, 13 (10), 1292-8 PMID: 20835253

Sunday, October 17, 2010

How many published studies are actually true?

I’d like to point readers to this excellent new article in The Atlantic on meta-researcher John Ioannidis. Ioannidis is building quite the career on exposing the multiple biases in medical research. He has taken the field to task, publishing papers with shy titles such as “Why Most Published Research Findings Are False”. He is rapidly becoming a personal hero of mine.

Ioannidis has examined and formally quantified research biases at all levels of “production”: in which questions are being asked, in the design of experiments, in the analysis of these experiments, and in the presentation and interpretation of the results. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis in the article. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”

While I have examined some of these biases for both general research and fMRI experiments, it’s worth noting that in the context of medical research, the stakes are even higher as they affect patient care. It is also unfortunate that medical studies are, according to Ioannidis, more likely to contain bias as there are stronger financial interests vested in the results, compared to cognitive neuroscience. 

An unfortunate result of the competitive research environment is a lack of replication of scientific results. Although replication is the gold standard of a result’s truth, replication work earns little acknowledgment, and thus researchers have little motivation to do it, except for the boldest of claims. Without replication, bias in research increases. However, even when a failure to replicate a major study is published, it often gets very little attention. A case in point is the failure to replicate the “Mozart effect”: the finding that listening to 10 minutes of a Mozart sonata significantly increased participants’ performance on a spatial reasoning test. A quick Googling of “Mozart effect” will show you several companies selling Mozart recordings to increase your child’s IQ, despite the failure to replicate.

It is very easy to get discouraged by this; after all, science should be a science, right? Ioannidis seems less discouraged, and reminds us of the following: “Science is a noble endeavor, but it’s also a low-yield endeavor… I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”

Wednesday, October 13, 2010

The self-control meta-game

Previously, I wrote about the use of neuroscience in the courtroom as a defense for criminal actions. I asserted that these arguments hold water only insofar as they can demonstrate a clear causal connection between the brain injury and the criminal behavior, and show that it was not possible for the defendant to control himself in the presence of such a brain injury.

Although I am a card-carrying pinko, I am enjoying the new book by Gene Heyman, Addiction: A Disorder of Choice. (A longer review post is forthcoming once I finish the book.) Heyman challenges the view that addiction is a compulsive, chronic and relapsing condition. By illustrating historical, cultural and individual differences in drug reactions, he shows that drug dependence can be overcome with will (and massive amounts of effort and motivation). This brings us right back to the question we left off with last time: under what circumstances can we reasonably expect a person to demonstrate self-control?

A common model of self-control posits that exhibiting self-control is an effortful, resource-consuming process. According to this model, a person has a set amount of self-control that can be exhibited before failure and/or “recharge”. A common source of evidence for this model is the fact that exhibiting self-control appears to consume a good deal of glucose. (Of course, this is a very interesting idea for those whose self-control is being directed towards dieting!) Another measure of self-control failure is the number of mistakes made on a Stroop test.

A compelling new study examines the limitations of the resource-limitation model of self-control. A first experiment demonstrated that people who do not agree with the resource-limitation model made fewer mistakes on the Stroop test following a cognitively demanding task than did those who professed belief in the model. Even stronger was a second experiment, in which experimentally manipulating participants’ beliefs in the model produced the same effects. Of course, like many things in psychology, William James was here before us when he stated “The greatest discovery of my generation is that a human being can alter his life by altering his attitudes”.

Job, V., Dweck, C. S., & Walton, G. M. (2010). Ego depletion--Is it all in your head? Implicit theories about willpower affect self-regulation. Psychological Science. PMID: 20876879