Let’s fight some more about the digital humanities!

Nan Z. Da’s “The Computational Case Against Computational Literary Studies” in the latest Critical Inquiry has been making the rounds on my social media feed. It’s a thorough and inventive argument against computational literary studies (CLS), and I am impressed by its doggedness, cross-field erudition, and commitment to its idea: she re-did studies, chased down data sets, and reconstructed analyses. My long critique below is simply a result of my being impressed enough to care about some subtleties of the argument. Because I have seen disagreements turn into blood sport in literary studies before, let me be crystal clear: nothing I say below should indicate anything other than admiration for the author and her work.*

So, what’s my concern?

I think the critique overshoots its mark in claiming that because there are errors in the data science, data science should be greeted with suspicion by literary critics. I am about to publish a co-authored paper on some of the inflated claims regarding machine learning and audio mastering, so I am sympathetic to Da’s skepticism as a general stance, but I’m concerned about how it works out in this case.

For those who haven’t read it, Da’s article proceeds by careful readings of a few CLS texts in order to argue with their modes of statistical interpretation and their relevance for literary criticism. I’m not going to dispute any of the statistical criticisms offered in the essay, because for me the main issue is how humanists should think about computation, quantification, and truth standards. (And I expect that the CLS crowd will offer its own response, and leave it to them to defend themselves.)

I also think Da is asking the right question, which should be posed to any new movement in scholarship: what does it contribute to the conversation beyond itself? This is especially true if a field claims to displace another. In other words, your burden of proof is higher if you argue that quantification should displace other modes of literary interpretation than if you argue that quantification can be useful alongside other modes of literary interpretation. I am firmly in the latter camp.

So, my issues with the piece really come down to two places:

1. What are the standards to which we want to hold humanities work? The warrant behind the main arguments of the piece is that the claims of CLS do not stand up to statistical scrutiny, or are artifacts of data mining, or, if the results are true, are banal. The problem is that no humanistic hermeneutic enterprise, apart from maybe some species of philology and bibliography, could actually withstand the burdens of proof implied by Da’s critique. Da’s suggestions for reviewing CLS work at the end of the appendix also suggest a kind of double standard for quantitative and qualitative work in literary studies:

1.) Request Complete Datawork: names of databases, raw texts if non-proprietary, scraping scripts, data tables and matrices, scripts used for statistical analysis and those results. Indication of whether codes are original/proprietary or not. 
2.) Request detailed documentation for replicating the results either on the original or on a similar dataset. Authors should be able to demonstrate the power of their model/test under good faith attempts at replicating similar circumstances. 
3.) Enlist someone to make sure the authors’ scripts run.
4.) Enlist a statistician to a.) check for presence of naturally occurring, “data-mining” results, implementation errors, forward looking bias, scaling errors, Type I/II/III errors, faulty modeling, straw man null hypothesis, etc.; b.) see if datawork is actually robust or over-sensitive to authors’ own filters/parameters/culling methods; c.) see if insights/patterns are actually produced by something mechanical/definitional; d.) apply Occam’s Razor Test: would a simpler method work just as well?
5.) Enlist a non-DH literary critic or literary historian to see if the statistical results actually lead to the broader inferences/claims that matter to literary criticism/history or only do so wishfully/willfully/suggestively/by weak analogy; and to see if, in papers that seek only to perform basic tasks, human classification & human description would not actually be faster (and far more accurate) for both the dataset in question and new ones. 
6.) Apply the “smell test”: is the work, minus the computational component, well-written, original, and insightful as a literary-critical or literary-historical argument? Would the argument be published without the computational elements? The “benefit of the doubt for fledgling field” should not apply.

Again, few works of literary criticism could survive this level of scrutiny: imagine every literary-historical claim having to please a history-department historian, or every psychoanalytic claim having to satisfy a psychology professor. Imagine every interpretive claim having to satisfy someone hostile to that mode of interpretation (in fact, these kinds of political concerns are already an issue in reviewing, where doxa often prevents good work from surfacing in certain places). Or imagine every work of speculative hermeneutics having to pass muster with a statistical analysis.

I won’t speak for what CLS has given literary study, but at least in my field, quantitative scholarship has produced lots of important work. To give but one example: before the media theory types (like me) got into writing about media and disability, quantitative scholars were already churning out articles documenting a range of issues around access, power, procedure and policy. They identified and laid out a problem that the so-called critical and interpretive people were slow to identify and acknowledge. I have my theories about why that is, but the key thing here is that whatever limitations a “critical” humanist might attribute to quantitative analysis, those epistemologies helped a group of scholars to identify a problem systematically before the self-described “more critical” people did. Each method brings with it biases and limitations but also produces openness to important questions.

I bristle when I hear people ignorant of the humanities or interpretive social science refer to work that doesn’t use numbers as “not empirical.” But given literary criticism’s own fraught histories, well documented and unearthed by its own practitioners (Williams, Said, Sedgwick, etc.), I am equally uncomfortable with the opposite bias.

On to technology.

On page 620, Da argues: “This is not at all to argue that literary analysis must have utility—in fact I believe otherwise—but if we are employing tools whose express purpose is functional rather than metaphorical, then we must use them in accordance with their true functions.” And later: “Text mining is ethically neutral.”

I agree that applying the measure of utility to literary analysis would be a bad thing. But both of the other claims seem bizarre to me. First, the humanities routinely adopt and abuse methods from other fields for their own ends, from art historians’ adaptation of slide projectors and slideware designed for business presentations, to literary critics’ adoption of psychoanalytic theories that are mostly discounted in contemporary psychology. That the use of a data science method is different in literary studies than in its home field is not itself an issue; in fact, I would hope that it is different. The claim about ethical neutrality is contravened by the examples: investing is not ethically neutral, so data mining to invest better is not ethically neutral. Ditto for legal discovery, given how the law works in practice. In fact, I know of no serious scholar of technology who would claim that any technology or technical process is inherently “neutral.”

Da claims CLS “has developed literary metaphors for what coding and statistics actually are and involve, turning elementary coding decisions and statistical mechanics into metanarratives about interpretive choice.” The implication is that they’re wrong to do this, but they are actually correct to do this. There is a long tradition in the history of statistics of thinking about the politics of statistics in terms of interpretation (see Ted Porter, Kurt Danziger, or Mara Mills’ forthcoming discussion of the statistical construction of normal hearing); this is readily acknowledged by people who work in signal processing (e.g., how to represent the behaviour of a spring reverb in an algorithm); and many authors in New Media Studies and Science and Technology Studies have shown that interpretation is heavily bound up with quantification and coding (some of the most important work in that area right now is appearing through Data & Society and Artificial Intelligence Now).

Finally, a small correction. The grant monies Da calculates from Andrew Piper’s CV in footnote 5 may seem almost impossibly large to American readers. I read the number as intended to indicate that it’s a waste of money to spend so much on data analytics, as a swipe at CLS. But while Piper is no doubt quite successful at getting grants, one can find equally successful Canadian humanists who have brought in similar or even greater amounts for no computational work whatsoever. One can peruse the online CVs of other humanities profs in Canada for comparison. In fact, most of this money winds up in the hands of students (or sometimes postdocs) who are hired as researchers (or whose degrees are funded). A relatively small part of it goes to equipment or database access.** It becomes part of graduate funding, and I and my students have benefitted tremendously from this system (I also coauthor with my students frequently as a result). While DH may get the lion’s share of humanities funding in the US (I don’t know), this is not actually the case in Canada or at McGill University. Canada’s investment in humanities research isn’t perfect, and it raises some issues I don’t like, but Critical Inquiry ought to be celebrating the use of public money to pay humanists to do research.

Do my concerns here detract from the force of Da’s critique of CLS? That depends on how you read them. But as we critique stuff, we also have to think carefully about what we are arguing for. I would like to see humanities that are heterodox, open to experimentation, and curious about approaches that are wildly different from their own.

Notes.

*Disclosures and disclaimers: Andrew Piper is a friend and a colleague, and Richard Jean So is a new hire at McGill who I’m excited and pleased to have. I am much less fired up about the quantitative turn in literary studies than either of them, though I’m always happy to hear about their work. Most of my work is historical, interpretive, philosophical, and ethnographic. I am, however, the child of two quantitative social scientists and am “good at statistics,” and I work in a field that is home to both quantitative and qualitative researchers.

**The exception here is Canada Foundation for Innovation (CFI) grants. But those are a special kind of hell. My failed CFI application to build a cross-disciplinary, multimodal, multimedia lab ranks as the second-worst experience of my career as a scholar, from undergrad to the present day. Anyone who actually wins one has surely earned it.
