Tuesday, April 28, 2015
I am thrilled to announce that starting in the fall, I will be working as an assistant professor at the Minerva Schools! I want to resurrect this blog with a post about my decision and what it means for my career, as well as for higher education more broadly.
What is Minerva? Good question. While they don’t yet have the name recognition of some of our older institutions, they aim to be "the first elite American university to be launched in a century". Starting from the question of “what does it mean to be an educated person in our time?”, they designed a new type of university, stripped of sports teams and other facilities they consider unnecessary. In fact, the only physical buildings are rented dormitories in San Francisco and other world cities. Classes are held online, but unlike the MOOC model of video lectures, the pedagogical philosophy is geared towards fully active learning. The teaching platform allows instructors to rapidly give polls and quizzes, and to create small groups at the touch of a button. The students live together and move to a different international city each year.
Why I’m excited about this:
If you’ve read this blog before, you will know that I have been uncomfortable with a number of aspects of a Typical Academic Career. I worry about the consequences of grade inflation, and about how to maintain academic rigor without getting crushed in teaching evaluations. I think about what I call “the ever-accelerating hamster wheel” problem: our research impact is often measured in terms of number of publications, and we are putting out more work than can be read, forcing us to aggressively market our own research just to be heard through the noise. In my field, about a third of papers are never cited! And we hire ever more graduate students and postdocs to do this ever-increasing amount of work, while the number of academic positions for them dwindles through the adjunctification of the professorship. Funding levels are so low that economists have questioned whether spending time on grant proposals is even worth it.
So, is it better to change an institution from within or to blaze a different path? This is the question I have been wrestling with over the course of this job season. Academia (though not academics, generally) is conservative, and its wheels turn slowly. It’s deeply hierarchical, and it can lull people with the sweet siren song of the status quo. That said, it’s what I’ve been working single-mindedly towards for the last 15 years. The case for staying at a traditional university was best summed up by one of my mentors: R1 life is not perfect, but it is the best of the available alternatives. But what if it can be made better?
In the end, the question I kept asking myself is "what do I want out of an academic career? What makes for a good academic life?" For a long time, I've been uncomfortable with institutions that do not value excellence in teaching. Any one professor's research program, no matter how high profile, is still a small slice in the big pie of human knowledge, while the impact that one can have in the life of a student through teaching and mentorship can last a lifetime. But what about research? Am I shutting myself out from this world? I don't think so. I am testing the bold hypothesis that I can do great research outside of the normal paradigm.
Paradoxically, I think that my research might have more impact when freed from the pressures of "bean counting". The academics whose work I read most are not the ones publishing the most papers, but the ones publishing papers with the most depth of thought. I hope to maintain and develop a number of collaborations, tapping into the best minds and free from the need to be in any one location. And I am working to have a home base where I can mentor the research projects of my Minerva students, sparking the same passion for research in them as I had as an undergraduate.
Stay tuned for the future; I think it's going to be bright.
Thursday, January 12, 2012
Research Works Act - seriously?
I am not a fan of the academic publishing industry, and have written before on the need for more openness in the publishing process. My position is very simple: it is not ethical for taxpayers to be forced to buy access to scientific articles whose research was funded by the taxpayer.
I am very dismayed at the introduction of the Research Works Act, a piece of legislation designed to end the NIH Open Access policy and other future openness initiatives.
Sigh... even in academic publishing, we're socializing the risks and privatizing the gains. Here, I agree completely with Michael Eisen's statement in the New York Times:
"But the latest effort to overturn the N.I.H.’s public access policy should dispel any remaining illusions that commercial publishers are serving the interests of the scientific community and public."
As this bill was written by representatives taking money from the publishing industry, perhaps we should include lawmakers in that group as well.
Friday, December 16, 2011
Who takes the responsibility for quality higher education?
This gives me chills: a professor denied tenure for using the Socratic method of teaching. Of course, there are two sides to every story, and this article is rather one-sided - I have been in classes where so-called Socratic methods were thinly veiled excuses for hurling insults at students - but if we are to take the article at face value, this is another story in a disturbing educational trend.
The Socratic method is challenging for students: it requires preparation and engagement with the material, and it requires being able to communicate effectively under pressure. However, I feel that learning involves a certain amount of discomfort. Learning means pushing past the boundaries of what we already know and what we can already do. Most undergraduate courses I took were lecture-style, teaching students to expect to be a passive audience in class. It's a much easier route, and a student can hide a lack of preparation, a misunderstanding, or a bad day. These students cannot hide forever, though, and this under-preparation often comes back to haunt them at exam time.
As a TA in graduate school, I saw many freshmen having harsh wake-up calls when the first midterms came back. The typical story was "But I came to all the classes, and I read the book chapters twice! How could I have gotten a C on the exam???" The unfortunate answer is that the student mistakes being able to parrot back a section of textbook or lecture for understanding the material. When an exam forces the student to use this information in an analytic or synthetic way, the facade of learning crumbles.
I don't know any instructor who wants to give a student a poor grade, but the integrity of the educational system depends on accurate assessment of mastery. If an instructor is fired, demoted, or denied tenure due to the rigor of his or her course, this could spell the end of education. Sadly, this story is reminiscent of this case: a professor denied tenure for not passing enough students. I highly recommend reading this page because, if we are to take the author at his word, he took every reasonable action to enable his students to succeed.
Who is responsible for student success in higher education? Professors, of course, need to be responsible for presenting learning opportunities to students in a clear manner, and for being available for advice and guidance during office hours. However, university students are adults and need to take responsibility for the ultimate learning outcomes. I am concerned by a culture of entitlement that has conditioned students to expect top marks for simply showing up. The expectations of the "self-esteem generation" and professors' incentives to earn high student evaluations both play a role, I suspect.
I wonder sometimes whether the cost of attendance at American colleges and universities partially drives this phenomenon. Paying for education turns students and their families into customers, and "customers are always right". Perhaps subsidizing higher education would create a culture that divorces education from "service", leading to more honest evaluations and better learning.
Sunday, October 9, 2011
Is the academic publishing industry evil?
Like most people, I didn't think much about the profit model for academic journals until I was publishing in them. Even after going through the process a few times, I am still struck by a feeling that academic journals are the toll trolls on the road of knowledge dissemination.
While a non-academic journal such as The Atlantic or the New Yorker pays its authors for content, academic journals get massive amounts of content volunteered to them. While non-academic journals pay an editor to hone and perfect the content, academic journals have volunteer peer reviewers and volunteer action editors doing this work for the cost of a line on the academic CV. Both types of journals offset some publication costs with advertising, but while non-academic journals sell for ~$5 per issue and under $50 for a year's subscription, an academic journal will charge $30-40 per article and thousands of dollars for a subscription. This means that the taxpayer who funds this research cannot afford to read it.
Let's say you're an author, and you're submitting your article to a scientific journal. It gets reviewed and edited, and is accepted for publication by the action editor. Great! Your excitement is diminished somewhat by two documents that then arrive: one that signs your copyright over to the journal, and a publishing bill based on the number of pages and color figures in your work (often a few hundred dollars). Now, if you want to use a figure from this article again (say, for your doctoral dissertation), you must write to the journal for permission to use your own figure. Seriously. Other points against academic journals can be found in this entertainingly inflammatory piece.
But what about open access journals? Good question. These journals exist online, and anyone can read them, which is great for small libraries struggling to afford journal costs and citizens wishing to check claims at the source. They're not so great for the academic, who gets slapped with a $1000-2000 fee for publishing in them. As inexpensive as online infrastructure is these days, I would love for someone to explain to me how it costs the journal so much just to host a paper.
I was excited to read this interview with academic publishers Wiley and Elsevier on these issues. However, I find most of the responses to be non-answer run-arounds. A telling exception comes in response to the first question, "What is your position on Open Access databases?" Wiley responded:
"The decision to submit a manuscript for publication in a peer-review journal reflects the researcher’s desire to obtain credentialing for the work described. The publishing process, from peer review through distribution and enabling discovery, adds value, which is manifest in the final version of the article and formally validates the research and the researcher."
(Emphasis mine).
In other words: we do this because there is demand for our journal as a brand, and you, researcher, are creating that demand. However, I do hold out hope that as more publishing moves online, more researchers and librarians will realize that there are both diamonds and rough in all journals. That realization will chip away at brand prestige and let the illusion of "publisher added value" wear away.
Monday, August 8, 2011
A solution to grade inflation?
In the somewhat limited teaching experience I've had, I have found grading to be particularly difficult. The grade a student receives in my class can determine whether he'll get or keep scholarships, and it will shape the opportunities he'll have afterward. This is a huge responsibility. As a psychophysicist, I worry about my grade-regrade reliability (will I grade the same paper the same way twice?), order effects in my grading (if I read a particularly good paper, do all papers after it seem not to measure up?), and whether personal bias is affecting my scoring (Sally is always attentive and asks good questions in class, while Jane, if present, is pugnacious and disruptive).
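If you want to put a number on that first worry, grade-regrade reliability can be estimated the same way any test-retest reliability is: grade the same stack of papers twice, a few weeks apart, and correlate the two passes. Here is a minimal sketch (the scores are made up purely for illustration):

```python
import numpy as np

# Hypothetical scores from grading the same ten papers on two occasions.
first_pass  = np.array([88, 92, 75, 81, 95, 70, 84, 90, 78, 86])
second_pass = np.array([85, 94, 72, 84, 93, 74, 80, 91, 75, 88])

# The Pearson correlation between the two passes serves as a simple
# grade-regrade (test-retest) reliability estimate; closer to 1.0 = more consistent.
reliability = np.corrcoef(first_pass, second_pass)[0, 1]
print(f"grade-regrade reliability: r = {reliability:.2f}")
```

The same trick works for order effects: correlate scores with each paper's position in the grading queue and look for a drift.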
Of course, the easiest thing is to give everyone generally good grades. The students won't argue that they don't deserve them, and in fact, there is evidence that they'll evaluate me better for it in the end.
And while many institutions have (implicitly or explicitly) adopted this strategy, the problem with grade inflation is that it hurts students who are performing at the top level, and removes accountability from our educational system. So, what do we do about grading?
The Chronicle of Higher Education has an interesting article showing two possible solutions. The second solution involves AI-based grading, which sounds intriguing. Unfortunately, no details were provided for how (or how well) it works, so I remain skeptical. However, the first proposed solution merits some discussion: outsource grading to adjunct professors who are independent of the course, professor and students. The article follows an online university that has enacted this strategy.
Pros of this idea:
- As the grader is not attached to either the professor or the student, bias based on personal feelings towards a student can be eliminated.
- In this instantiation, graders are required to submit detailed justifications for their grades, are provided extensive training and are periodically calibrated for consistency. This can provide far more objective grading than what we do in the traditional classroom.
However, the idea is not perfect. Here are some cons that I see:
- The graders' grades get translated into pass or fail. A pass/fail system does not encourage excellence, original thinking, or going beyond the material given.
- Much of traditional grading is based on improvement and growth over a semester, and this is necessarily absent in this system. Honestly, I only passed the second semester of introductory chemistry in college (after failing the first test) because the professor made an agreement with me that if I improved on subsequent tests, she would drop the first grade.
- Similarly, the relationship between professor and student is made personal through individualized feedback on assignments. Outsourcing grading means that there cannot be a deep, intellectual relationship between parties, which I believe is essential to learning and personal growth.
While not perfect, this is an interesting idea. What are your ideas for improving on it (or grading in general)?
Sunday, July 10, 2011
Managing scholarly reading
Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking. —ALBERT EINSTEIN
How much literature should one read as an academic? Of course, the answer will vary by field, but even within my own field, I find little consensus as to the "right" amount of reading to do.
It is true that no one can read everything that is published, even in a single field such as cognitive science, while maintaining one's own productivity. In Google Reader, I subscribe to the RSS feeds of 26 journals, and from these I get an average of 37 articles per day. Of these, I feel like I should pay attention to about 5 on an average day. If I were to closely read all of these, I would run out of time to create new experiments, analyze data, and write my own papers.
It turns out that in an average day, I'll read one of these papers and "tag" the other 4 as things I should read. But this strategy gets out of control quickly. In May, I went to a conference, didn't check my reader for a couple of days and came back to over 500 journal articles, or around 35 that I felt deserved to be read. I have over 1300 items tagged "to read" in my Zotero library. At my current rate of reading, it would take me over 3.5 years to get through the backlog even if I didn't add a single article to the queue.
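For the curious, the back-of-the-envelope arithmetic behind that estimate, using the numbers above (a sketch only; the one-paper-per-day rate is my own average from the previous paragraph):

```python
# Rough estimate of how long the "to read" backlog would take to clear,
# assuming no new articles are ever added to it.
backlog = 1300        # items tagged "to read" in my Zotero library
read_per_day = 1      # papers I actually read closely on an average day

years_to_clear = backlog / read_per_day / 365
print(f"{years_to_clear:.1f} years to clear the backlog")  # ~3.6 years
```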
So, how to stay informed in an age of information overload? It seems that there are a few strategies:
1. Read for, rather than read to. In other words, read when knowledge on a particular topic is to be used in a paper or grant review, but don't read anything without a specific purpose for that information. According to proponents of this method, information obtained when reading for reading's sake will be lost anyway, leading to re-reading when one needs the information.
This method vastly decreases the overwhelming nature of the information and makes information acquisition efficient. However, it is not always practical for science: if you're only reading for your own productivity, you're going to miss critical papers and, at worst, repeat experiments that have already been done.
2. Social "reading", augmented by abstract skimming. In this method, one does not spend time reading, but instead goes to as many talks and conferences as possible, learning about the literature through the knowledge of one's colleagues. This method seems to work best in crowded fields: the more unique your research program, the more you'll have to do your own reading. And all of this traveling consumes both time and money.
3. Don't worry about checking through many journals, but set alerts for specific topics. My favorite is PubCrawler, suggested by Neuroskeptic. It works well when my keywords and the authors' keywords coincide, but I seem to have set too many topics, and I get both too many "misses" and too many "false alarms".
How do you keep up with literature?
Saturday, July 9, 2011
Bitter academic roundup
So, you think you want to go to graduate school? You might want to consider the following:
This infographic nicely details many of the perils of the PhD and post-PhD process.
Here's what an honest graduate school ad might look like.
Sunday, June 26, 2011
Is college worth it for everyone?
In yesterday's New York Times, David Leonhardt opined that we ought to send as many young adults to college as possible. His economic arguments ran as follows:
- The income delta between college grads and non-college grads has increased from 40% to over 80% in the last three decades.
- If one calculates a return on investment for a college education, it is 15%, higher than stocks, and certainly higher than current real estate.
Unfortunately, he completely glosses over the problem of cost. He writes:
"First, many colleges are not very expensive, once financial aid is taken into account. Average net tuition and fees at public four-year colleges this past year were only about $2,000 (though Congress may soon cut federal financial aid)."
As if the imminent cutting of federal financial aid can be reduced to a parenthetical! The reality is that college prices have increased over 130% since 1988 while median family incomes have remained stagnant. This situation makes college possible only through the amassing of large amounts of student debt. Indeed, for the first time in this country, student loan debt has surpassed credit card debt. Taking on this kind of debt in this lackluster economy is problematic. Furthermore, unlike mortgages, student loan debt does not go away with bankruptcy, leading some thinkers to forecast education as the next bubble.
Leonhardt also unhelpfully compares the arguments against universal college education to the arguments against universal high school education from over half a century ago. This would be fine if we were in the position to make four years of university education part of public education. However, calling for all families to take on this debt seems irresponsible and elitist.
Sunday, May 15, 2011
Growing PhDs "like mushrooms"
If you have been following this blog, it will come as no surprise that I frequently worry about the state of the university system. I believe there are structural problems in the system that do a disservice to students (at both the undergraduate and graduate levels) as well as to staff (particularly adjuncts and non-tenure-track faculty, but also junior tenure-track professors).
Recently, Nature published a series of opinion articles on the over-production of PhDs in the sciences. We are producing too many people who are apprenticed in a career path that can accommodate only a fraction of them.
As a result, we are spending longer in graduate school and in our postdocs, but the number of people passing through the eye of the needle to a professorship is shrinking as tenure-track jobs are replaced with temporary and adjunct positions. In 1973, 55% of US biology PhDs secured tenure-track positions within six years of completing their degrees, and only 2% were in a postdoc or other untenured academic position. By 2006, only 15% were in tenured positions six years after graduating, with 18% untenured. This largely fits with my perception: it has been seven years since I began graduate school, and my incoming class is spread evenly across staying in school, holding postdocs, and working in industry. Not one of us currently has a tenure-track faculty position. Something must be very broken in the system for prospects to be this bleak for graduates of a top-five department.
So why doesn't the market change so that supply meets demand? Essentially, the system runs on cheap graduate and postdoctoral labor. "Yet many academics are reluctant to rock the boat as long as they are rewarded with grants (which pay for cheap PhD students) and publications (produced by their cheap PhD students). So are universities, which often receive government subsidies to fill their PhD spots." In fact, faculty members who are reluctant to perpetuate this cycle are punished in grant review for budgeting a research scientist at $80,000 per year when others get the same work done by a postdoc at $40,000 per year.
So, how did we get here? Part of the issue has to be that more people are going to college than ever before and the university system does not properly scale to the demand. In the US in 1970, only 11% of people over the age of 25 had a bachelor's degree, but this number had climbed to 28% by 2009. So more graduate students, postdocs and adjuncts are being used to teach the courses to accommodate all of these new students. While some claim that it is just too expensive to have tenure-track faculty teaching all of these courses, one must also consider the recent trend towards massive salaries for university professors.
Actually, if anyone could explain university economics to me, I'd be grateful.
And where do we go from here? Personally, I love the suggestions made by William Deresiewicz in this fantastic article. Particularly, "The answer is to hire more professors: real ones, not academic lettuce-pickers."
Tuesday, April 19, 2011
Gender and scientific success
I didn't want to write this post. I really don't want to touch this with a ten-foot pole. What follows is messy and complicated and guaranteed to make everyone mad at least some of the time. (Ask Larry Summers.)
We need a sane approach to how we deal with gender in the sciences.
Women are making measurable representation gains in the sciences. This is an undisputed good. Everyone benefits when the right people are doing the right job. However, despite the fact that the majority of bachelor's degrees are now being awarded to women, women only make up about 20% of professorships in math and the sciences. Why?
The three basic alternative answers: 1.) women tend not to choose careers in math or science (either willingly or due to life/family circumstances); 2.) women are barred from achievement in math and science through acts of willful discrimination; or 3.) women do not have the same aptitude for achievement in math and sciences as men.
This is a difficult issue to study, as people's careers cannot be manipulated experimentally, and we are left with mostly correlational evidence. An exception is CV studies, in which identical CVs are given to judges with either a woman's or a man's name at the top, and the judges are asked to rate the competence of the candidates. These studies typically find that the "male candidates" are judged to be more competent than the "female candidates". As no objective differences exist between the CVs, this is a measure of sex discrimination.
Reviewing the correlational evidence for gender discrimination in the sciences, Ceci and Williams find that when researchers with equal access to resources (lab space, teaching loads, etc.) are compared, no productivity difference is found between male and female scientists. Female scientists are, on average, less likely to have as many resources as male scientists because they are more likely to take positions with heavier teaching loads. How do we reconcile the CV studies showing discrimination with the correlational evidence suggesting none? In an excellent analysis of the Ceci and Williams paper, Alison Gopnik offers a possible hypothesis: "Women, knowing that they are subject to discrimination, may work twice as hard to produce high-quality grants and papers, so that the high quality offsets the influence of discrimination".
It's possible. But Gopnik also admits that policy changes could be responsible. In other words, affirmative action-style policies that give women advantages could counteract the subconscious gender discrimination seen in the CV studies.
There's a darker side to these policies, though. Some worry about the discounting of a female professor's abilities, assuming she rose to the position via policy rather than talent. Furthermore, some policies designed to give women more voice actually end up giving them more work - if a certain number of women need to be on a committee, then female professors end up doing more service work than their male counterparts.
And then there's the matter of why female faculty find themselves in low-resource situations to begin with. Stated eloquently by Gopnik, "the conflict between female fertility and the typical tenure process is one important factor in women's access to resources. You could say that universities don't discriminate against women, they just discriminate against people whose fertility declines rapidly after 35."
And well-meaning policies also interact with the fertility issue in insidious ways. For example, many universities offer to "pause" the tenure clock for a year for a faculty member who gives birth before tenure. Sounds great, right? It could be, except that there is a tremendous amount of pressure not to take the extra year for fear of seeming weak. This is especially true in departments with faculty members who have already chosen not to take the time.
So... we have unconscious discrimination, conscious policies to counter said unconscious discrimination, conscious and unconscious backlash against those policies, and a structural problem around fertility. In other words, it's a complicated picture and I don't know what the answer is. I do, however, agree with Shankar Vedantam's assessment: "It is true that fewer women than men break into science and engineering careers today because they do not choose such careers. What isn't true is that those choices are truly "free.""
Wednesday, March 9, 2011
The value of teaching at the university level
The Neuroskeptic has a particularly insightful post on the uncomfortable disconnect between how universities, academics and politicians see the role of teaching. I've written occasionally on some of the broken aspects of the academy, and I think Neuroskeptic's piece adds a couple of crucial thoughts to the discussion:
"And academics have no incentive to teach well and, in most cases, no incentive to make sure that their university has a reputation for good teaching."
(Emphasis mine.)
Indeed, if anything, being involved in excellent teaching is viewed as the "kiss of death" for one's tenure case at many American research universities. And, as Neuroskeptic points out, the nomadic lives of young researchers prevent strong ties to a particular university:
"Until you get to the level of tenured professor, if ever, you cannot assume that you'll be working in the same place for very long. Many academics will go to one university for their undergraduate degrees, another for their masters, another for their doctorate, and then another two or three as junior faculty member before they "settle down" - and the majority don't make it that far."
Perhaps the solution is to tenure faculty more often and earlier. Imagine young, energetic, passionate academics, unafraid to teach with excellence and filled with a sense of place in their institution. Maybe this is what we need for excellent undergraduate education.
Wednesday, February 2, 2011
Is higher education the next bubble?
Here is a thought-provoking interview with Peter Thiel, who is now offering entrepreneurial young people a fellowship NOT to go to college.
"Education is a bubble in a classic sense. To call something a bubble, it must be overpriced and there must be an intense belief in it...Probably the only candidate left for a bubble...is education. It’s basically extremely overpriced. People are not getting their money’s worth, objectively, when you do the math. And at the same time it is something that is incredibly intensively believed; there’s this sort of psycho-social component to people taking on these enormous debts when they go to college simply because that’s what everybody’s doing."
"There are a few things that make it worse. One is that when people make a mistake in taking on an education loan, they’re legally much more difficult to get out of than housing loans. With housing, typically they’re non-recourse — you can just walk out of the house. With education, they’re recourse, and they typically survive bankruptcy. If you borrowed money and went to a college where the education didn’t create any value, that is potentially a really big mistake."
"You know, we’ve looked at the math on this, and I estimate that 70 to 80 percent of the colleges in the U.S. are not generating a positive return on investment. Even at the top universities, it may be positive in some sense — but the counterfactual question is, how well would their students have done had they not gone to college? Are they really just selecting for talented people who would have done well anyway?"
My own take is that the return on investment for college varies widely with what one studies in college. Yes, the skills gained from obtaining an English BA translate less directly into private sector skills than say, a computer science or engineering degree. But ultimately, we need to sit down and have a real conversation about how educated we want our country to be, both how broadly educated and how specialized. We also need to talk about how we want to scale the educational system and how we want to pay for it.
Saturday, January 15, 2011
What are you writing that will be read in 10 years?
This was a question asked of an acquaintance during a job interview for a professorship in the humanities. It's one hell of a question, and one that unfortunately goes unasked in the sciences.
In my other life, I submitted a paper this week. It's not a bad paper - it shows something new, but like too many papers being published today, it's incremental and generally forgettable. It's not something that will be read much in 10 years.
I love reading old papers. They are from a time when authors were under less pressure to produce by volume. They are consequently more theoretical, thoughtful, and broad than most papers published today, because the authors had the luxury of time to sit and think about the results and place them in context.
As I've pointed out earlier, the competitive academic environment tends to foster bias in publications: when trying to distinguish oneself amongst the fray of other researchers, one looks for sexy and surprising results. So do the journals, who want to publish things that will get cited the most. And so do media outlets, vying for your attention.
Jonah Lehrer's new piece on the "decline effect" in the New Yorker almost gets it right. The decline effect, according to Lehrer, is the phenomenon of a scientific finding's effect size decreasing over time. Lehrer dances around the statistical explanations of the effect (regression to the mean, publication bias, selective reporting, and significance fishing), and seems all too willing to dismiss these in favor of a more "magical" and "handwave-y" explanation:
"This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness"
But randomness (along with the sheer number of experiments being done) is the underlying basis of the other effects he wrote about and dismissed. The large number of scientists we have doing an even larger number of experiments is not unlike the proverbial monkeys randomly plunking keys on a typewriter. Eventually, some of these monkeys will produce some "interesting" results: "to be or not to be" or "Alas, poor Yorick!" However, it is unlikely that the same monkeys will produce similar astounding results in the future.
Like all analogies, this one is imperfect as I am not trying to imply that scientists are only shuffling through statistical randomness. What I am saying is that given publication standards of large, new, interesting and surprising results, it is very likely that any experiment meeting these standards is an outlier and that its effect size will regress to the mean. This cuts two ways: although some large effects will get smaller, some experiments that were shelved for having small effects will probably have larger effect sizes if repeated in the future.
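To make the regression-to-the-mean point concrete, here is a quick simulation of my own (a sketch, not Lehrer's argument or anyone's published model): thousands of labs estimate the same modest true effect from noisy samples, only the estimates that come out looking large get "published", and exact replications of those published findings come out smaller on average.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2     # the modest true effect size (in SD units)
noise_sd    = 0.25    # sampling noise in each lab's estimate
n_studies   = 10_000

# Each lab's observed effect is the truth plus sampling noise.
observed = true_effect + rng.normal(0, noise_sd, n_studies)

# "Publication standard": only the big, exciting estimates see print.
published = observed[observed > 0.5]

# Exact replications of the published studies: same truth, fresh noise.
replications = true_effect + rng.normal(0, noise_sd, published.size)

print(f"true effect:                {true_effect:.2f}")
print(f"mean published effect:      {published.mean():.2f}")     # inflated by selection
print(f"mean effect on replication: {replications.mean():.2f}")  # back near the truth
```

No mysterious force is needed: selecting on extreme estimates guarantees that the retest regresses toward the true value.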
This gets us back to my penchant for old papers. With more time, a researcher could do several replications of the study and find the parameters under which the effects could be elicited. And often, these papers are from the pre-null-hypothesis-significance-testing days, so the effects tend to be larger, as they needed to be visually obvious from a graph. (A colleague once called this the JFO statistical test, for "just f-ing obvious". It's a good standard.) This standard guards against many of the statistical sins outlined by John Ioannidis.
This is also why advances in bibliometrics are going to be key for shaping science in the future. If we can formalize what makes a paper good, and what makes a scientist's work "good", then (hopefully) we can go about doing good, rather than voluminous, science.
Thursday, December 2, 2010
What is the real value of effective writing?
This shocking article, written by a man who makes a living writing college papers for other people, has had a lot of mileage around the web lately.
I had two immediate reactions to the essay: “I would love to invite this guy to a dinner party, he sounds really interesting” and “this is just another example of the profoundly broken economics of the American higher educational system”
“In the midst of this great recession, business is booming. At busy times, during midterms and finals, my company's staff of roughly 50 writers is not large enough to satisfy the demands of students who will pay for our work and claim it as their own,” stated the pseudonymous Mr. Dante.
Alex Reid wrote about some economic observations from the article. “… it's a little sad that people who are clearly accomplished writers (to be able to produce quickly good academic material across the disciplines) are willing to work for such little pay.” This is in stark opposition to my own reaction, which was along the lines of “wow, I could increase my post-doc salary by a substantial margin by doing this!” And recall that despite my whining, my salary is quite reasonable when compared to my adjunct peers in the humanities. I whole-heartedly agree with Mr. Reid’s assertion that we should really start questioning our paradigms about college education.
To review: our students can’t afford not to get a college degree, and they end up paying smart individuals who might otherwise be teaching them, if it weren’t more economically worth their while to pass them through the system. These students are in college because it’s just “what you do” to get a job that doesn’t involve flipping burgers. What kind of education do people need for a typical job? How do we best scale this to the largest number of people?
Of course, this ghostwriting problem is far from limited to the college population. This week’s Nature had this article about “editorial services” that help researchers get work published, assisting with everything from experimental logic to typesetting. Such services operate in a massive gray area of authorship ethics – if a service organizes your ideas and suggests a critical control experiment, is that not a unique intellectual contribution?
Although both cases are very different, it is evident that the inability to clearly communicate one’s ideas is a primary barrier to academic and life success, and although we do not compensate teachers for this skill, its value is shown on the black market.
Sunday, October 17, 2010
How many published studies are actually true?
I’d like to point readers to this excellent new article in The Atlantic on meta-researcher John Ioannidis. Ioannidis is building quite the career on exposing the multiple biases in medical research. He has taken the field to task, publishing papers with such shy titles as “Why most research findings are false”. He is rapidly becoming a personal hero of mine.
Ioannidis has examined and formally quantified research biases at all levels of “production”: in which questions are being asked, in the design of experiments, in the analysis of these experiments, and in the presentation and interpretation of the results. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis in the article. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
While I have examined some of these biases for both general research and fMRI experiments, it’s worth noting that in the context of medical research the stakes are even higher, as they affect patient care. It is also unfortunate that, according to Ioannidis, medical studies are more likely to contain bias than those in cognitive neuroscience, as stronger financial interests are vested in the results.
An unfortunate result of the competitive research environment is a lack of replication of scientific results. Although replication is the gold standard for establishing a result’s truth, there is little acknowledgment, and thus little motivation, for researchers to do it, except for the boldest of claims. Without replication, bias in research increases. However, even when a failure to replicate a major study is published, it often gets very little attention. A case in point is the failure to replicate the “Mozart effect”: the finding that listening to 10 minutes of a Mozart sonata significantly increased participants’ performance on a spatial reasoning test. A quick Googling of “Mozart effect” will show you several companies selling Mozart recordings to increase your child’s IQ, despite the failure to replicate.
It is very easy to get discouraged by this; after all, science should be a science, right? Ioannidis seems less discouraged, and reminds us of the following: “Science is a noble endeavor, but it’s also a low-yield endeavor… I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
Monday, September 20, 2010
Should we crowd-source peer review?
Peer review has been the gold standard for judging the quality of scientific work since World War II. However, it is a time-consuming and error-prone process, and both lay and academic writers are now questioning whether it should be ditched in favor of a crowd-sourced model.
Currently, a typical situation from an author’s perspective is to send out a paper and receive three reviews about three months later. Typically, the reviewers will not completely agree with one another, and it is up to the editor to decide what to do with, for example, two mostly positive reviews and one scathingly negative one. How can the objective merit of a piece of work be judged accurately on such limited, noisy data? Were all of the reviewers close experts in the field? Were they rushed into doing a sloppy job? Did they feel the need for revenge against an author who had unfairly judged one of their own papers? Did they feel they were in competition with the authors of the paper? Did they feel irrationally positive or negative towards the authors’ institution or gender?
And from the reviewer’s point of view, reviewing is a thankless and time-consuming job. It is often a full day’s work to read, think about, and write a full and fair review of a paper. It requires accurate judgment on all matters from grammar and statistics to a determination of future importance to the field. And the larger the problems the paper has, the more time is spent in the description of and prescription for these problems. So, at the end of the day, you send your review and feel 30 seconds of gratitude that it’s over and you can go on to the rest of your to-do list. In a couple of months, you’ll be copied on the editor’s decision, but you almost never get any feedback about the quality of the review from an editor, and very little professional recognition of your efforts.
The peer review process is indeed noisy. A study of reviewer agreement on conference submissions found that agreement between reviewers was no better than chance. In another study, described here, women’s publications in law reviews were shown to receive more citations than men’s. A possible interpretation of this result is that women are treated more harshly in the peer review process, and as a consequence publish (when they can publish) higher-quality articles than men, who do not face the same level of scrutiny.
In peer review, one must also worry about competition and jealousy. In fact, a perfectly "rational" (Machiavellian) reviewer might reject all work that is better than his own in order to advance his career. In a simple computational model of the peer review process, the fraction of either "rational" or random reviewers had to be kept below 30% for the system to beat chance. The same model suggests that the refereeing system works best when only the very best papers are published. One can easily see how the “publish or perish” system hurts science.
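I haven’t seen the code behind that model, but the flavor of it can be captured in a toy simulation like the one below. This is my own sketch, not the published model: papers get a true quality score, three reviewers vote, and honest reviewers are gradually replaced by random or Machiavellian ones. The thresholds and pool sizes are assumptions for illustration only.

```python
import random

def vote(kind, quality, threshold=0.5, own_quality=0.7):
    """One reviewer's accept/reject vote on a paper of given true quality."""
    if kind == "random":
        return random.random() < 0.5          # coin flip
    if kind == "rational":
        # Machiavellian: never accept work better than one's own
        return threshold < quality <= own_quality
    return quality > threshold                # honest reviewer

def accuracy(frac_bad, bad_kind, n_papers=20000, threshold=0.5):
    """Fraction of papers whose majority decision matches their true quality."""
    correct = 0
    for _ in range(n_papers):
        q = random.random()                   # true quality, uniform in [0, 1]
        kinds = ["honest" if random.random() > frac_bad else bad_kind
                 for _ in range(3)]
        accepted = sum(vote(k, q, threshold) for k in kinds) >= 2
        correct += accepted == (q > threshold)
    return correct / n_papers

for frac in (0.0, 0.1, 0.3, 0.5):
    print(f"{frac:.0%} bad reviewers: "
          f"rational -> {accuracy(frac, 'rational'):.2f}, "
          f"random -> {accuracy(frac, 'random'):.2f}")
```

Even in this crude version, the accuracy of the majority decision falls off as self-interested or careless reviewers creep into the pool, which is the qualitative point.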
It is a statistical fact that averaging over many noisy measurements gives a more accurate answer than any single measurement. Francis Galton discovered this when asking individuals in a crowd to estimate the weight of an ox. Pooling noisy estimates works whether you ask one measurement of many people or ask the same person to estimate multiple times. A salient modern example of the power of crowd-sourcing is, of course, Wikipedia.
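You can convince yourself of this with a few lines of simulation. The ox’s weight and the spread of the guesses below are made-up numbers, not Galton’s data; the point is just that the error of the crowd’s average shrinks roughly like one over the square root of the number of guessers.

```python
import random
import statistics

TRUE_WEIGHT = 1200   # the "ox", in pounds (illustrative, not Galton's figure)

def crowd_error(n_guessers, guess_sd=150, trials=2000):
    """Average error of the crowd's mean guess, over many simulated crowds."""
    errors = []
    for _ in range(trials):
        guesses = [random.gauss(TRUE_WEIGHT, guess_sd) for _ in range(n_guessers)]
        errors.append(abs(statistics.mean(guesses) - TRUE_WEIGHT))
    return statistics.mean(errors)

for n in (1, 10, 100, 1000):
    print(f"{n:5d} guessers -> average error of the mean ~ {crowd_error(n):6.1f} lbs")
```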
In a completely crowd-sourced model of publication, everything that is submitted gets published, and everyone who wants to can read and comment. Academic publishing, in other words, would look a lot like the blogosphere. The merits of a paper could then be determined by citations, trackbacks, page views, and so on.
On one hand, there are highly selective journals such as Nature, which rejects more than half of submitted papers before they even reach peer review and ultimately publishes about 7% of submissions. In this system, too many good papers get rejected. On the other hand, a completely crowd-sourced model means that there are too many papers for any scientist in the field to keep up with, and too many good papers won’t be read because it’s not worth anyone’s time to hunt for diamonds in the rough. Furthermore, although the academy is far from settled on how to rate professors for hiring and tenure decisions, it is even less clear what a “good” paper would be in such a system, since more controversial topics would get more attention.
The one real issue I see is that, without editors seeking out reviewers to do the job, the only people reviewing a given paper may be the friends, colleagues, and enemies of the authors, which could turn publication into a popularity contest. Some data bear out this worry. In 2006, Nature conducted an experiment adding open comments to the normal peer review process. Of the 71 papers that took part, just under half received no comments at all, and half of the total comments were on only eight papers!
So, at the end of the day, I do believe that, with good editorial control over comments, a more open peer review system would be of tremendous benefit to authors, reviewers, and science.
Tuesday, September 14, 2010
Dispatches from the Academy
I have a weird and wonderful job. My job is to try to figure out things that have not yet been figured out, write about them, submit said writing to journals, and then argue with similar strange people until said words come out in print. The particulars of what I’m trying to figure out have nothing to do with making widgets, and almost nothing to do with deeply noble social causes such as curing cancer or Alzheimer’s disease. I am an academic, in the business of producing knowledge for knowledge’s sake.
There has been much criticism of the academy lately, primarily brought about by the publication of Mark Taylor’s Crisis on Campus: A Bold Plan for Reforming Our Colleges and Universities.
While American universities are not without their faults, the tenor of this criticism has reached laughably hyperbolic heights, with claims that "Graduate education is the Detroit of higher learning" and that the current university system is a "Ponzi scheme".
A Ponzi scheme? Seriously?
The uncomfortable truth at the heart of Taylor’s argument is that there are too few tenure-track professorships for too many young Ph.D.s. This is true. When I left graduate school last year, there were about 100 graduate students for 40 faculty members. That sounds like a conservative student-to-faculty ratio, but it is far above replacement level, given that each of those 40 faculty members trained students before us and will train more after. And although some of my cohort knew they wanted to go into industry, the vast majority of us were bent on the tenure track. It’s too early to know what will happen to us, but it is safe to say that we have a few years of fierce competition ahead, and, of necessity, many of us will end up in non-academic pursuits. Still, in my field (and other sciences), we did not incur extra student debt in grad school, and we picked up some math and computer skills that make us somewhat employable.
Graduate students in the humanities have a harder road, often having to pay for their graduate educations and then becoming part of an economic underclass of highly educated adjunct professors, earning $1000–$5000 per course, without benefits. David Hiscoe described his experience as an adjunct as “five writing courses a quarter at $12,500 a year, slightly more than the average hourly wage I'd pulled down as a not-too-able carpenter's assistant during the summers when I should have been writing my dissertation.”
Taylor’s solution? Abolish tenure to kick out the lazy, old, irrelevant and expensive professors. The problem is that the economics of this argument don’t make any sense. Of course, a tenured professor is going to cost more than an adjunct. But the cost of this tenured professor is chump change compared to, say, landscaping, catering, the cushy salaries of university administrators, state-of-the-art athletic facilities and the salaries of football coaches.
Tenured professors (at least in my own field; it might be different in Taylor’s department of religion) are not lazy people. I believe the sheer difficulty of getting tenure weeds out the people who are not intrinsically motivated to achieve. By the time one’s tenure is decided, one has gone through 4 years of undergrad, 4-10 years of grad school, 1-6 years as a postdoc, and 5-7 years as a non-tenured professor. You may get through a few years on an “eyes on the prize” mentality, but not half of your working life!
Furthermore, pressures that exist before tenure exist after tenure: research can only happen with funding from competitive grant proposals, and highly selective journals will not publish work that is irrelevant.
Anticipating the counter-argument for tenure, Taylor dismisses “academic freedom” outright, stating, "If you don't have the guts to speak out before, you're not gonna have it after."
Academic freedom isn’t just about saying something controversial in the classroom; it’s about being able to take scientific risks. Many of the young professors I know, under pressure to maintain a certain publication volume, publish small, incremental pieces of work. This is not to say that it isn’t good work, but it is safe work, work that doesn’t radically change anyone’s world view and that, in the big picture, will be forgotten. To do important scientific work, one needs the ability to take some risks, to explore a set of experiments that might not pan out, and to still have a job if they fail. Without a degree of job security, we will lose cutting-edge research.
But even as I disagree with Taylor’s major points, I do see major problems in American research universities. Chief among them is the lack of emphasis on teaching. The weight given to teaching in tenure decisions varies from university to university, but it ranges from indifference to disdain. I recall with sadness the anxiety my graduate advisor felt over receiving a teaching award, it being seen as a "kiss of death" for tenure.
I want to be the professor who values teaching, because it does more good in the world than research alone. At the end of a long and venerable research career, one’s life’s work will be scarcely more than a paragraph in an introductory textbook, but teaching well affects students for a lifetime.
One point that no one seems to acknowledge in these debates over the future of universities is this: the prospect of becoming a tenured professor is a dream much like that of becoming a rock star. In both professions, far more people want in than the market can support. Both afford a lifestyle of creative freedom. And in both, you will find young people putting off creature comforts just for the opportunity to try, whether that means toiling in a wedding band or adjuncting for $3000 a semester. It’s not the safest bet, but I still can’t think of anything else I’d rather be doing.