
Friday, August 5, 2011

Proposed changes to IRBs

Institutional review boards (IRBs) are committees formed within universities and research organizations. Their job is to review proposed research that uses human subjects, evaluating it for ethical treatment of the human participants. It's an important job given the rather spotty history we have with ethical research (see here, here and here among others).

However, a wide range of activities counts as human subjects research, ranging from experimental vaccine trials to personality tests, from political opinions to tests of color vision. Currently, all of this research is split into two groups: "regular" human subjects research, which is subject to a full review process, and "minimal risk" research, which is subject to a faster review process. Research is defined as minimal risk when it poses no more potential for physical or psychological harm than the activities of ordinary daily life.

My research falls into the minimal risk category. My experiments have been described by several subjects as "like the world's most boring video game". Beyond being boring, they are not physically harmful, and they expose no deep psychological secrets either. No matter. Each year, researchers like me fill out extensive protocols describing the types of experiments they propose to do, detailing all possible risks, outlining how subject confidentiality will be maintained, and so on. And each participant in a study (each time s/he participates) receives a 3-4 page legal document explaining all of the risks and benefits of the research, which the subject signs to give consent.

This does seem to be overkill for research that really doesn't pose any sort of physical or psychological threat to participants, and I applaud new efforts to modernize and streamline the process. (Read here for a great summary of the details. Researchers: you can comment until the end of September; the Department of Health and Human Services is soliciting opinions on a number of points.)

Among the proposed changes are moving minimal risk research from expedited review to no review and eliminating the need for physical consent forms (a verbal "is this OK with you?" will suffice). Both would improve my life substantially. However, I believe that standardizing IRB policies across the country would do the most good.

I am currently at my 4th institution and have seen as many IRBs. Two of them have been entirely reasonable, requiring the minimum amount of paperwork and approving minimal risk research across the board. The other two, however, have been less helpful. As Tal Yarkoni points out, "IRB analysts have an incentive to be pedantic (since they rarely lose their jobs if they ask for too much detail, but could be liable if they give too much leeway and something bad happens)". However, I think it goes beyond this. In some sense, IRBs feel productive when they can show that they have stopped or delayed some proportion of the research that crosses their desks.

I have had an IRB reject my protocol because they didn't like my margin size, didn't like my font size, and didn't like the cute cartoon I put on my recruitment posters (apparently cartoons are coercive). I've had an IRB send an electrician into the lab with a volt meter to make sure my computer monitor wouldn't electrocute anyone. My last institution did not approve an experiment that was a cornerstone of my fellowship proposal because it required data to be gathered online (this is very common in my field) and I couldn't guarantee that no one outside my approved age range (18-50) would do the experiment. Under the current rules, I couldn't simply use my collaborator's IRB approval, because every institution involved must approve a protocol. However, another of the proposed changes would require only one institution's approval.

I'm very optimistic about these proposed changes... let's hope they happen!

Sunday, July 10, 2011

Managing scholarly reading

Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking. —ALBERT EINSTEIN

How much literature should one read as an academic? Of course, the answer will vary by field, but even within my own field, I find little consensus as to the "right" amount of reading to do.

It is true that no one can read everything that is published, even in a single field such as cognitive science, while maintaining one's own productivity. In Google Reader, I subscribe to the RSS feeds of 26 journals, and from these I get an average of 37 articles per day. Of these, I feel I should pay attention to about 5 per day. If I were to closely read all of them, I would run out of time to create new experiments, analyze data and write my own papers.

It turns out that in an average day, I'll read one of these papers and "tag" the other 4 as things I should read. But this strategy gets out of control quickly. In May, I went to a conference, didn't check my reader for a couple of days, and came back to over 500 journal articles, around 35 of which I felt deserved to be read. I have over 1300 items tagged "to read" in my Zotero library. At my current rate of reading, it would take me over 3.5 years to get through the backlog even if I didn't add a single article to the queue.
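The arithmetic behind that 3.5-year estimate is easy to check; a minimal sketch (the numbers are taken from this post, the variable names are my own):

```python
# Back-of-the-envelope check of the "to read" backlog.
backlog = 1300        # items tagged "to read" in Zotero
read_per_day = 1      # papers actually read on an average day

days_to_clear = backlog / read_per_day
years_to_clear = days_to_clear / 365
print(f"{years_to_clear:.1f} years")  # -> 3.6 years
```

And that assumes the queue is frozen: with 4 new papers tagged per day against 1 read, the backlog actually grows by 3 items a day.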

So, how to stay informed in an age of information overload? It seems that there are a few strategies:

1. Read for, rather than read to. In other words, read when knowledge on a particular topic will be used in a paper or grant review, but don't read anything without a specific purpose for that information. According to proponents of this method, information obtained when reading for reading's sake will be lost anyway, leading to re-reading when one actually needs the information.

This method vastly decreases the overwhelming volume of information and makes acquisition efficient. However, it is not always practical for science: if you read only for your own productivity, you will miss critical papers and, at worst, run experiments that have already been done.

2. Social "reading", augmented by abstract skimming. In this method, one does not spend time reading, but instead goes to as many talks and conferences as possible, learning about the literature through the knowledge of one's colleagues. This method seems to work best in crowded fields: the more unique your research program, the more you'll have to do your own reading. And all of this traveling is costly in both time and money.

3. Don't worry about checking many journals; instead, set alerts for specific topics. My favorite is PubCrawler, suggested by Neuroskeptic. This works well when my keywords and the authors' keywords coincide, but I seem to have set up too many topics, and I get both too many "misses" and too many "false alarms".
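The miss/false-alarm trade-off in topic alerts can be made concrete with a toy sketch (the keywords and relevance labels below are invented for illustration; this is not how PubCrawler works internally):

```python
# Each article: (its keywords, whether it's actually relevant to me).
alert_keywords = {"rsvp", "scene perception", "visual attention"}
articles = [
    ({"scene perception", "rapid presentation"}, True),   # overlap + relevant: hit
    ({"categorization", "speed"}, True),                  # no overlap, but relevant: miss
    ({"visual attention", "insects"}, False),             # overlap, not relevant: false alarm
]

hits = sum(bool(kw & alert_keywords) and rel for kw, rel in articles)
misses = sum(not (kw & alert_keywords) and rel for kw, rel in articles)
false_alarms = sum(bool(kw & alert_keywords) and not rel for kw, rel in articles)
print(hits, misses, false_alarms)  # -> 1 1 1
```

A miss happens whenever my vocabulary and the authors' diverge; a false alarm whenever a keyword is shared for the wrong reason. Adding more topics trades misses for false alarms, which matches my experience of getting too many of both.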

How do you keep up with literature?

Tuesday, May 17, 2011

A reverse decline effect for RSVP?

Last week I attended the Vision Sciences Society annual meeting in Florida. Good times, good science. Although I don't use this blog for talking about my own research or field, I was struck by a talk from Molly Potter that was germane to this blog. (In full disclosure, Molly was the chair of my PhD committee, a personal hero of mine, and a large influence on my thinking).

In the 1960s and 1970s, Prof. Potter sought to study the temporal limits of complex visual processing. As our eyes move multiple times per second, the visual input we receive is constantly changing. To emulate this process, she developed the technique of rapid serial visual presentation (RSVP). In this method, one presents a participant with a stream of photographs, one after another, for a very brief time (half a second or less per picture). She found that when you give a participant a target scene (either by showing the picture or by describing it), the participant can detect the presence or absence of that picture even when the pictures are presented for only a tenth of a second each! Below is an example of one of these displays. Try to find a picture of the Dalai Lama wearing a cowboy hat.

Pretty cool, huh?

In her new research, Prof. Potter asked how much further the visual system can be pushed, presenting RSVP streams at only 50, 33, or even 13 ms per picture. Here is a graph adapted from my notes on her talk:

Even at 13 ms per image, participants were performing at about 60% correct, and by 80 ms per image, they were nearly perfect.

"Huh," I thought to myself during the talk, "this is really high performance. It seems even higher than performance at the longer presentation times reported in the original papers."

So back in Boston, I looked up the original findings. Here is one of the graphs from 1975:


So, participants in 1975 needed 125 ms per picture to reach the same level of performance that modern participants achieve at 33 ms per picture.

I've complained a bit here about the so-called "decline effect", the phenomenon of effect sizes in research declining over time. The increased performance for RSVP displays can be seen as a kind of reverse decline effect.

Why?

In 1975, the only way to present pictures at such a rapid rate was with a tachistoscope. Today's research is done on computer monitors. Although the temporal properties of CRT monitors are well worked out, perhaps the two methods are not fully equivalent. On the other hand, compared to 1975, our lives are full of fast movie cuts, video games and other rapid stimuli, so the new generation of participants may simply have faster visual systems.
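One concrete difference between the two methods: on a CRT, a picture can only be displayed for whole multiples of the refresh period, so the achievable durations are quantized by the refresh rate. A quick sketch (the refresh rates here are common monitor values I've chosen for illustration, not ones from the talk):

```python
def frame_ms(refresh_hz):
    """Duration of one video frame, in milliseconds."""
    return 1000.0 / refresh_hz

# On a CRT, a picture can only stay up for a whole number of frames,
# so the possible presentation durations depend on the refresh rate.
for hz in (60, 75, 100):
    durations = [round(n * frame_ms(hz), 1) for n in range(1, 4)]
    print(f"{hz} Hz: possible durations {durations} ms")
# -> 60 Hz: possible durations [16.7, 33.3, 50.0] ms
# -> 75 Hz: possible durations [13.3, 26.7, 40.0] ms
# -> 100 Hz: possible durations [10.0, 20.0, 30.0] ms
```

This may be why the conditions sit at values like 13, 33, and 50 ms, roughly one frame at 75 Hz or two to three frames at 60 Hz. Whether a single 13 ms CRT frame is visually equivalent to a 13 ms tachistoscope exposure is exactly the open question.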