“Look! Now you can…”: Gadget Logic in Big Data and the Digital Humanities

One of the problems with the move to digital humanities and big data is a kind of “gadget logic” taken from the advertising rhetoric of consumer electronics.  Much of the reportage around digital humanities work on the big data side of the field focuses on what computers can do that people can’t.  By that I mean there is a “look! now you can ______” rhetoric that to my ears sounds exactly like an ad for a new consumer electronics product.  “Look, now you can update your friends on what you’re doing in real time” is not all that far from “look, now you can manipulate data in this new exciting way.”

The point, as a friend put it to me recently, is not whether you can do this or that kind of analysis, but what you actually do with it and whether it tells us anything we don’t know.

Take this piece from The Awl for instance.  It’s all very cool, except for the following problems:

1.  The two main examples are feats of good research, but at least as summarized, they neither generate substantial new knowledge nor transform ongoing debates scholars are having—which is what I look for when reading new material (though I suspect that a reading of the Lim book would reveal deeper knowledge than the article represents; I have only had a look at the article).  I realize the actual work may be more sophisticated than the article, but since the article claims there’s a revolution in knowledge, I’d say it carries the burden of proof to demonstrate that claim.  We knew that press use of sources was biased toward men (and, as a comment points out, also toward official sources put forward by PR departments and agencies, which likely means more men).  We also knew that presidential rhetoric got less flowery and pedagogical over the course of the 20th century.

2.  The method discussion in the journalism article does not even approach giving us something that would look like reproducible science. If it’s going to claim to be scientific, then this is a major failure.  If, on the other hand, we’re doing some kind of interpretive humanities work, that’s fine, but then they can’t claim the mantle of science.  They use words like “data mining” and “machine learning” and “automatic coding” but good luck trying to figure out what judgments were encoded into their software and whether you might want to make different ones if you were to do the study yourself.

I agree with the authors that

Our approach — apart from freeing scholars from more mundane tasks – allows researchers to turn their attention to higher level properties of global news content, and to begin to explore the features of what has become a vast, multi-dimensional communications system.*

But that raises two questions: will that attention to higher-level properties of a phenomenon yield greater or more profound insight?  Only if scholars know how to ask better questions, which implies a greater sophistication with both theory and synthetic thought–two areas where current AI is woefully lacking.  There is also the question of the degree to which scholars should be free of direct engagement with the data.  Another group of people in the digital humanities seems to argue that you can’t be a real scholar anymore if you can’t code.  As I suggested yesterday, that’s a silly proposition, but people should of course know how stuff works, which would include their data.

Synthetic, abstract, large-scale work requires a good thinker to move between scales–something Elvin Lim appears to do more effectively in the article’s representation of The Anti-Intellectual Presidency.  The description makes it look more like real scholarship to me because Lim uses digital methods when they suit the specific question he’s asking, turns to other approaches when called for, and works on the topic from multiple registers.  If you’re hearing echoes of what I said yesterday about writing tools, you’re right.

Good work comes from good questions, not shiny new tools.  In the hands of a skilled craftsperson, a new tool can yield beautiful and unexpected results, and that’s ultimately what we want.  But we can also sometimes get those from old tools.

See also: musicians and instruments, surgeons and their instruments, drivers and cars, cooks and kitchens.

We also need to question the value of speed as a universal good.  For some things, like getting out an op-ed, speed is good.  But for certain kinds of scholarship, slowness is better–time spent with a topic helps you understand it better and say smarter things.  If everyone can write books and articles twice as fast as they used to–or faster–nobody will be able to read them and keep up, except for machines.

Oh, wait.  That already happened.

As one of my teachers told me, the hardest thing to do after jumping through all the institutional hoops is to remember why you got into academia in the first place.  But that’s also the most important thing.  Much as I love talk of and experimentation with new tools and processes, I worry that some of the DH discussions slide over into old-fashioned commodity fetishism and lose track of the purpose of the work in the first place.  As my friend said, it’s not whether you can that matters in scholarship.  It’s what you do.

*I disagree with the authors’ assertion that the modern media system is more complex and multilayered than earlier systems.  Have a look at Richard Menke’s Critical Inquiry essay on the Garfield assassination (which I just taught last week).  Sure, it’s different, but from a perspective of cultural analysis there is every bit as much complexity (perhaps more, since less of it is automated and systematized), though a world of people walking around and writing on giant bulletin boards isn’t well enough archived to be data mined.
