LaCour and the Neoliberalization of the Academy (Update 6/5)

I’m not a political scientist, and I had somehow missed the initial wave of press for the now-retracted Michael LaCour/Donald Green article in Science on the persuasive power of canvassers to increase support for gay marriage. So I can’t really discuss what the findings, had they been legitimate, would have meant for the issue of gay marriage, for the general prospect of canvassing as a persuasive technique, or for experimental methods in the social sciences.

But as an academic, I’ve been pretty much unable to avoid the controversy that has erupted so quickly over the apparent fraudulence of both the study and LaCour’s academic credentials. I’m trying to avoid gratuitous schadenfreude at LaCour’s downfall; there’s certainly enough of that already. But one thing I’ve noticed that bears some analysis is the way that critiques both from the non-academic right and from within the academy miss the point.

For right-wingers, the low-hanging fruit has been LaCour’s focus on public opinion on gay marriage. Although the impact of the article in disciplinary terms touched on research methods, and in broader political terms on general methods of persuasion, conservatives have identified LaCour with normative support for the liberal cause of gay marriage and, in commentary that I don’t feel like excerpting, have suggested that academic research is simply propaganda for gay/environmental/minority/feminist causes.

And, within the academy, concern has centered on the personal integrity of LaCour, his co-author Green, and the peer reviewers who apparently failed to exercise critical judgment over the findings and methods. There’s something to this, and there’s certainly resentment to spare for the young academic who achieved mainstream media attention and a tenure-track job at Princeton. Tom Bartlett’s article here at the Chronicle is eye-opening for its concise summary of the scope of LaCour’s apparent deception. If Bartlett’s right, this was a complex and well-orchestrated long con. And, if you can’t cheat an honest man, its success raises questions about those proximate to the con:

Who, if anyone, was supervising Mr. LaCour’s work? Considering how perfect his results seemed, shouldn’t colleagues have been more suspicious?

But the elements of that con are noteworthy in that, had they been legitimate achievements, they would reflect perfectly an emergent script for professionalization in the neoliberal academy. Bartlett hints at this more structural problem

Is this episode a sign of a deeper problem in the world of university research, or is it just an example of how a determined fabricator can manipulate those around him?

but orders his clauses in a way that draws attention to the latter half of the sentence. It’s the front half that ought to concern us.

I’ve just been reading Wendy Brown’s Undoing the Demos: Neoliberalism’s Stealth Revolution (about which more in the future), and, among the many areas the book covers, its evaluation of professionalization in an academy shaped by the values of the market is astute.

One irony of neoliberal entrepreneurialism and debt-financed investment is that it often draws producers and investors into niche industries and products that are unsustainable over time: derivatives, bubble markets, and so forth. Current norms and metrics for academic success are an example of this. Faculty gain recognition and reward according to standing in fields whose methods and topics are increasingly remote from the world and the undergraduate classroom. Graduate students are professionalized through protocols and admonitions orienting them toward developing their own toeholds in such fields. This professionalization aims at making young scholars not into teachers and thinkers, but into human capitals who learn to attract investors by networking long before they “go on the market,” who “workshop” their papers, “shop” their book manuscripts, game their Google Scholar counts and “impact factors,” and above all, follow the money and the rankings. “Good investment” is the way departments speak of new hires, and “entrepreneurial” has become a favored term for describing exceptionally promising young researchers; it is deployed to capture both a researcher’s capacity to parlay existing accomplishments into new ones and the more quotidian business of grant getting. These commonplaces in the sciences, social sciences, business, and law schools will soon dominate the entirety of university and scholarly activity. (195)

Back to Bartlett’s account, then. What’s amazing about it, read against Brown’s summation of neoliberal professionalization, is how many of Brown’s bases LaCour’s story touches. If Wendy Brown, or any critic of the neoliberal academy or of political science within it, wanted a better parable, they’d have to, uh, fabricate it themselves.

Let’s start with the attention given to LaCour’s initial findings. Ethnographers have long discussed how social interactions can highlight differences in perspective and modify behavior. I don’t think any of them would find the conclusion that canvassing can produce some shifts in opinion controversial (though a good ethnographer would find it extremely simplistic). LaCour and Green’s article got traction because it was published in Science and assumed the mantle of experimental quantitative methodology. As Malcolm Gladwell, Freakonomics, and sports analytics have demonstrated, the media has an insatiable appetite for any analysis that can be expressed in a number, and “data driven” is a lazy but widely accepted synonym for “credible.” As the New York Times reported when the article was published, the method was more reliable than psychologists’ fuzzy suppositions about persuasion:

Psychologists have long suspected that direct interaction, like working together, can reduce mutual hostility and prejudice between differing groups, whether blacks and whites or Christians and Muslims. But there is little evidence that the thaw in attitudes is a lasting one.

The study, published Thursday by the journal Science, suggests that a 20-minute conversation about a controversial and personal issue — in this case a gay person talking to voters about same-sex marriage — can induce a change in attitude that not only lasts, but may also help shift the views of others living in the same household. In other words, the change may be contagious. Researchers have published similar findings previously, but nothing quite as rigorous has highlighted the importance of the messenger, as well as the message.

It’s not an accident, therefore, that LaCour’s credibility hinged on his association with Green, who, as Bartlett notes,

 is well known for pushing the field to become more experimental.

What does that mean in practice? Per the Times,

Mr. LaCour and Donald P. Green, a professor of political science at Columbia, designed an experiment that mimicked a drug trial. They recruited 972 voters from these precincts, broadly surveyed their attitudes — including on same-sex marriage — and then randomly assigned them to receive either the “treatment” or a placebo.

There’s of course nothing inherently wrong with this approach, but praising the importation of methods from pharmaceutical trials is problematic. Even in medical research, the placebo/treatment model can yield highly dysfunctional results if the complex interaction of multiple factors is not accounted for, a tendency exacerbated by the fact that the purpose of most of these studies is to bring drugs to market. And, of course, people don’t respond to social interaction the way they respond to pharmaceuticals. Yet, despite these weaknesses, the application of industry methods is regarded as inherently “rigorous.”
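To make the design concrete, here is a minimal sketch, in Python, of the treatment/placebo structure the Times describes. Everything in it is simulated and illustrative: the attitude scale, the even assignment split, and the hypothetical 0.3-point treatment effect are my assumptions for demonstration, not LaCour’s actual protocol or data.

```python
import random
import statistics

# A minimal, simulated sketch of the "drug trial" design described above:
# survey baseline attitudes, randomly assign a treatment or a placebo,
# then compare the groups' follow-up shifts. All numbers are illustrative.

random.seed(42)
N = 972  # the number of voters the Times reports were recruited

# Baseline attitudes on a 1-5 scale (simulated for illustration).
baseline = [random.uniform(1, 5) for _ in range(N)]

# Random assignment, as in a placebo-controlled trial: half are canvassed
# on same-sex marriage (treatment), half get an unrelated conversation
# (the "placebo").
ids = list(range(N))
random.shuffle(ids)
treated = set(ids[: N // 2])

def follow_up(i: int) -> float:
    """Simulated follow-up response: a hypothetical +0.3 effect for the
    treated group, plus noise. In a real study this would be a survey."""
    effect = 0.3 if i in treated else 0.0
    return baseline[i] + effect + random.gauss(0, 0.5)

shifts_treated = [follow_up(i) - baseline[i] for i in ids if i in treated]
shifts_placebo = [follow_up(i) - baseline[i] for i in ids if i not in treated]

# The headline statistic: the difference in mean attitude shift.
print(f"mean shift, treated: {statistics.mean(shifts_treated):+.3f}")
print(f"mean shift, placebo: {statistics.mean(shifts_placebo):+.3f}")
```

The structural point is that the analysis operates only on the numbers: nothing in the difference-of-means calculation distinguishes responses gathered by canvassers in the field from responses generated on a laptop, which is part of what made fabricated data look so “rigorous.”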

Which means that, structurally, studies that produce counterintuitive findings by applying the sorts of methods that become “best practices” as they transfer into new fields and disciplines are ripe to create the kind of buzz and “impact” that the market values. Consider Bartlett’s description of the piece’s media reception:

That paper, written with Donald P. Green, a professor of political science at Columbia University who is well known for pushing the field to become more experimental, had won an award and had been featured in major news outlets and in a segment on public radio’s This American Life. It was the kind of home run graduate students dream about, and it had helped him secure an offer to become an assistant professor at Princeton University. It was his ticket to an academic career, and easily one of the most talked-about political-science papers in recent years. It was a big deal.

Why would LaCour initiate this kind of fraud? Because he clearly recognized that a fraudulent publication, if not discovered, would do as much for his professionalization and his human capital as an academic as a legitimate one would.

Other parts of LaCour’s long con flow from this basic shift toward market norms in academe. If the value of research is measured by the money thrown at it in the form of grants, forcing the social sciences to catch up to fields with close and lucrative ties to industry, then the professional value of grants increases. LaCour apparently went on the job market with nearly $800,000 in research grants listed on his CV: grants that he never received, in some cases from organizations that don’t actually exist. And while on one level it’s tempting to dismiss this sort of lying as pathological, is it, really? LaCour was astute enough to observe that claiming the grants was the next best thing to getting them, and in one of those displays of hubristic recklessness that came thiiiiis close to being converted by success into genius, he doubled down on, and nearly cleaned up on, the bet that no one would check. When entrepreneurialism becomes the highest value in academe, junior scholars are encouraged to pad their human-capital portfolios. When funding for research is devolved to proliferating foundations and agencies outside the walls of the university, the better to encourage that competitive entrepreneurialism, is it any surprise that, like subprime mortgages bundled into a derivative, the individual grants get tough to track down for due diligence?

Back to Bartlett. Where, then, do we go from here après LaCour? What is there to be learned?

Several of those who have worked with Mr. LaCour say that they are still waiting for an explanation, that they hope he will answer some of the outstanding questions. There are a host of factual issues — like why he didn’t try harder to obtain funding, or why he offered multiple accounts of what happened to the raw data — but the overriding question is simple: Why do this in the first place? Was it ambition run amok? Was it one minor deception that grew into a tapestry of falsehoods?

I don’t doubt that if Michael LaCour elects to tell his own story in the future, the answers to those questions will prove very interesting, even if only from a voyeuristic point of view or for schadenfreude.

Maybe, though, instead of continuing to interrogate Michael LaCour, or ritually stoning him for his crimes, we should save some rocks for the institutions that enabled him and that articulated the incentives driving his fraud in the first place. If LaCour’s story is How to Succeed in Academe Without Really Researching, then understanding what exactly constitutes “success” ought to be the first step.

UPDATE 6/5:

This article by Steve Kolowich in the Chronicle illustrates a related problem: adapting the methodology of a pharmaceutical trial to social-science research sometimes means adopting the impacts and purposes of pharmaceutical trials as well. To wit, the experiments may influence rather than simply evaluate political behavior, and, more troublingly, the method sets itself up for sponsorship by organizations outside academe willing to pay for whatever impact the experiment creates. Researchers may genuinely approach whether canvassers can persuade skeptics to vote for gay marriage as a research question, but if this method is widely adopted, it will increasingly give funders the power to determine the questions based on the instrumental value of the impacts.

Kolowich is attuned to the disciplinary shifts in political science toward experimental methods:

Still, the fact that Mr. LaCour had embraced an experiment-based approach to political-science research was no accident. Such methods have “captured political science’s imagination,” says Arthur Lupia, a professor at the University of Michigan at Ann Arbor, especially among ambitious young scholars

He also explains one aspect of the institutional relations driving the shift: as academic researchers are pushed by neoliberal administrative rationality to become “entrepreneurial,” the rational response is to align with sources of funding and accommodate their priorities. In some fields, the logical partnership is with industry. For political scientists, the logical partners are parties and increasingly wealthy issue-advocacy groups. Kolowich:

The embrace of field experiments in political science marks a historical shift, says Mr. Lupia. For centuries, scholars relied on qualitative methods, and later statistics, to understand how politics works. Since the turn of the century, however, scholars have increasingly teamed up with campaigns and political-action groups to run field tests on the American electorate.

Isn’t it proper for political scientists to have an impact on electoral politics? Kolowich quotes Notre Dame Associate Professor David Nickerson, who enthuses that involving researchers in campaigns is a win-win:

“The fact that you’re looking at real-world outcomes and working with real-world organizations means that you’re going to have a more direct effect and policy significance.”

But as with many questions of academic freedom and public involvement, the question really hinges on the means by which academics are engaging the public, and how the research questions are formed. If researchers are along for the ride measuring the impact of poll-tested issues or phrases developed by campaigns, how free is the inquiry? And how beneficial to society is the public outreach of academics?

Here’s where I don’t think Kolowich really gets it: after arguing that the discipline of political science should come up with rules and standards for how much research can interfere with the ecosystem of politics, he defines the question as one of research ethics:

The unraveling of Mr. LaCour’s study and the problems with the Montana experiment have important differences. The Montana case has to do with the question how much of a footprint researchers ought to leave. Meanwhile, “the LaCour story has really nothing to do with experiments,” says Mr. Krosnick. “It’s a guy who made up data.”

And yet the alleged sin, in both cases, has been deception. In politics anything goes; not so in academe. Researchers who take the tools of experimentation out of the laboratory and into the field walk a fine line between observing the game, and playing it.

Again, what this misses is the way that the institutional transformation of academe has created significant pressure for researchers to seek out sources of funding and align themselves with the questions that matter to the funders. Put it another way: the Supreme Court case Caperton v. Massey arose from a West Virginia Supreme Court ruling in a mining-industry controversy. The US Supreme Court got involved because Don Blankenship, the head of Massey Energy, inserted himself into a campaign for a state supreme court seat by paying for ads insinuating that one candidate was friendly to child molesters. Those ads contributed to the election of the Massey-friendly candidate and to Massey’s prevailing in the lawsuit (the US Supreme Court ultimately ruled that the justice should have recused himself). Now, think of what could have been learned if political scientists had been brought on board to participate in Blankenship’s independent media blitz. We might have learned from the inside precisely how much voters are persuaded by ads suggesting a candidate will give free rein to pedophiles. If political scientists had been brought on board by Karl Rove for the 2000 South Carolina primary, maybe we could have learned from the inside exactly how much voters were influenced by Rove’s push polls insinuating that John McCain had fathered a black child out of wedlock. In the coming election cycle, political scientists could run all kinds of experiments to see whether voters respond to the message that Hillary Clinton murdered Vince Foster or that the government is using chemtrails for mind control.

We could know. Would we be better off? Would this indicate the discipline of political science engaging with the public or simply with interested non-academic actors?
