London (no, the one in Ontario)

When I first moved to Montreal in 2004, people would tell me they were going to “London” and I’d get excited for them because I thought they were going to the UK.  Then they’d say “London, Ontario” and it would be slightly disappointing.  Nine years later, last night, I’m checking in at the airport to fly to London, Ontario for the first time and the woman says to me “can I see your passport?”  When I explain it’s Ontario, she says “oh, sorry” and gives me a disappointed look.

Some cosmic loop has just closed.

“Look! Now you can…”: Gadget Logic in Big Data and the Digital Humanities

One of the problems with the move to digital humanities and big data is a kind of “gadget logic” taken from the advertising rhetoric of consumer electronics.  Lots of the reportage around digital humanities work on the big data side of the field focuses on what computers can do that people can’t.  By that I mean there is a “look! now you can ______” rhetoric that to my ears sounds exactly like an ad for a new consumer electronics product.  “Look, now you can update your friends on what you’re doing in real time” is not all that far from “look, now you can manipulate data in this new exciting way.”

The point, as a friend put it to me recently, is not whether you can do this or that kind of analysis, but whether you actually do it and whether it tells us anything we don’t already know.

Take this piece from The Awl for instance.  It’s all very cool, except for the following problems:

1.  The two main examples are feats of good research, but at least as summarized, they neither generate substantial new knowledge nor transform ongoing debates scholars are having, which is what I look for when reading new material (though I suspect that a reading of the Lim book would suggest deeper knowledge than the article represents; I have only had a look at the article).  I realize the actual work may be more sophisticated than the article, but since the article is claiming that there’s a revolution in knowledge, I’d say it bears the burden of proof to demonstrate that claim.  We knew that press use of sources was biased toward men (and, as a comment points out, also biased toward official sources put forward by PR departments and agencies, which likely means more men).  We also knew that presidential rhetoric got less flowery and pedagogical over the course of the 20th century.

2.  The method discussion in the journalism article does not even approach giving us something that would look like reproducible science. If it’s going to claim to be scientific, then this is a major failure.  If, on the other hand, we’re doing some kind of interpretive humanities work, that’s fine, but then they can’t claim the mantle of science.  They use words like “data mining” and “machine learning” and “automatic coding” but good luck trying to figure out what judgments were encoded into their software and whether you might want to make different ones if you were to do the study yourself.

I agree with the authors that

Our approach — apart from freeing scholars from more mundane tasks – allows researchers to turn their attention to higher level properties of global news content, and to begin to explore the features of what has become a vast, multi-dimensional communications system.*

But that raises two questions: will that attention to higher-level properties of a phenomenon yield greater or more profound insight?  Only if scholars know how to ask better questions, which implies a greater sophistication with both theory and synthetic thought, two areas where current AI is woefully lacking.  There is also the question of the degree to which scholars should be free of direct engagement with the data.  Another group of people in the digital humanities seems to argue that you can’t be a real scholar anymore if you can’t code.  As I suggested yesterday, that’s a silly proposition, but people should of course know how stuff works, which would include their data.

Synthetic, abstract, large-scale work requires a thinker who can move between scales, something Elvin Lim appears to do more effectively in the article’s representation of The Anti-Intellectual Presidency.  The description makes it look more like real scholarship to me because Lim uses digital methods when they suit the specific question he’s asking, turns to other approaches when called for, and works on the topic from multiple registers.  If you’re hearing echoes of what I said yesterday about writing tools, you’re right.

Good work comes from good questions, not from shiny new tools.  In the hands of a skilled craftsperson, a new tool can yield beautiful and unexpected results, and that’s ultimately what we want.  But we can also sometimes get those results from old tools.

See also: musicians and instruments, surgeons and their instruments, drivers and cars, cooks and kitchens.

We also need to question the value of speed as a universal good.  For some things, like getting out an op-ed, speed is good.  But for certain kinds of scholarship, slowness is better–the time with a topic helps you to understand it better and say smarter things.  If everyone can write books and articles twice as fast as they used to–or faster–nobody will be able to read them and keep up, except for machines.

Oh, wait.  That already happened.

As one of my teachers told me, the hardest thing to do after jumping through all the institutional hoops is to remember why you got into academia in the first place.  But that’s also the most important thing.  Much as I love talk of and experimentation with new tools and processes, I worry that some of the DH discussions slide over into old-fashioned commodity fetishism and lose track of the purpose of the work in the first place.  As my friend said, it’s not whether you can that matters in scholarship.  It’s what you do.

*I disagree with the authors’ assertion that the modern media system is more complex and multilayered than earlier systems.  Have a look at Richard Menke’s Critical Inquiry essay on the Garfield assassination (which I just taught last week).  Sure, it’s different, but from the perspective of cultural analysis there is every bit as much complexity (perhaps more, since less of it is automated and systematized), though people walking around and writing on giant bulletin boards isn’t really well enough archived to be data mined.

Writing: Tools of the Trade

A Facebook conversation started by Tara Rodgers got me thinking about the tools we use as writers.  Obviously, the creative process is highly idiosyncratic and personal at some level even though none of us are as unique as we think we are.  Try everything; select what works.  The proliferation of new tools is exciting but it can also overwhelm, or worse, tempt people to contort their working practices to fit new tools when the old ones worked just as well.

There are tons of alternatives to MS Word now, and of course also lots of alternatives for bibliographic software.  There are also programs like Evernote and Scrivener that many academics find incredibly useful.  My time with Scrivener was a disaster.  Part of it was impatience.  I’ve been using Word since sometime in the early 1990s (how long? Before Word, I wrote my BA thesis in the Leading Edge Word Processor, which lacked footnote functionality).  So the idea of trying to write while learning the quirks of a new program would have to promise a massive payoff for me.  For me, writing is at its core a creative and synthetic process.  The goal of writing technology is therefore to get into the “zone” as fast as possible (where I’m thinking about the writing and not the tool), and that means a high level of instrumental comfort with the tools.  The organizational powers of something like Scrivener are not only unnecessary for me; I think they get in the way of my practice as a writer.

That’s not to say Word is without its problems.  I think I mostly manage it by ignoring most of its functions and compartmentalizing the interface as much as possible.  It’s customized for nobody, instead manifesting itself as distended bloatware.  After not bothering to keep track of the Audible Past and discovering, right before final submission, that it clocked in at over 160,000 words, I foolishly wrote MP3 as a single file, thinking it would help me keep track of the text’s size.  It didn’t make it any smaller, and Word did not like a 140,000-word file (I eventually got it back down to around 120,000 words).  Neither did Endnote, my bibliographic software.

A lot of the extant writing advice online seems to advocate more teching up.  In Tara’s conversation, Jentery Sayers offered a link to some “how to” advice from Bill Turkel.  The advice about backup and digitizing as much as possible is absolutely great.  I use Apple’s Time Machine, which isn’t an awesome backup but it’s sufficient.  Every few months I take an extra hard drive, clone the Time Machine drive, and take it to school.  I also use Dropbox (any cloud service is fine) and pay for extra storage space.  My music collection isn’t up there, but there’s lots of room for pretty much everything I write, as well as the large set of “incidental writings” we do as academics, ranging from course materials to letters of rec.  Basically, everything that would go in a “Documents” folder goes in Dropbox instead, which is organized inside as a “Documents” folder would be: by type of project/task and by date.
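If you wanted to script that convention, a minimal sketch might look like the following.  This is not my actual setup: the Dropbox location and the folder names are hypothetical, and it just illustrates the by-type-and-date layout described above.

    # A minimal sketch of the folder convention described above;
    # the Dropbox root and the category names are hypothetical.
    from datetime import date
    from pathlib import Path

    DROPBOX_ROOT = Path.home() / "Dropbox"  # hypothetical location

    def new_project_folder(category, slug):
        """Create (if needed) and return a dated folder such as
        ~/Dropbox/letters-of-rec/2013-01-15-smith-tenure/."""
        name = "{}-{}".format(date.today().isoformat(), slug)
        folder = DROPBOX_ROOT / category / name
        folder.mkdir(parents=True, exist_ok=True)
        return folder

    # Example: file new course materials by task type and date.
    print(new_project_folder("course-materials", "historiography-seminar"))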

Audible Past was a file cabinet full of stuff I had to reorganize on my floor, over and over again.  In grad school I had nightmares about house fires from time to time, and a friend even gave me a metal lockbox with a copy of his dissertation field notes.  MP3 was a few folders, plus a labyrinth of digital files, many of which were searchable.  I still printed things out to mark them up (like the oral historical interviews) but the physical copies were incidental to the process.  The archival copy is digital, and backed up in multiple ways.

That said, I’m not convinced, as Turkel is, that everyone and her sister needs to know how to program to do research.  A lot of what we do isn’t about making things but rather interpreting, synthesizing, and representing.  Programming can help with that, but it is not the only or necessarily the best path for everyone.  It certainly won’t make your writing better, easier to read, or more interesting, though it may lead you down different research paths.  It’s like making original music.  Some musicians are great and distinguished by their customized tools.  Others are amazing with off-the-shelf technology.

The other thing that struck me about Turkel’s page is his suggestion to write in the smallest increments possible.  As is clear from the above, I prefer to write in the largest increments possible.  My best thinking and revising happens when I have several thousand words together in hand and can begin to actually behold the relations that are only latent in early drafts.  I also spend a lot of time talking with people about my work and trying out ideas in public presentations before publishing (again the opposite of his approach).  I would never dream of doing a book in secret because I can’t imagine what advantage it would give me as a thinker or a writer.  Again, different strokes for different folks, but don’t assume your working practice needs to fit a tool you are handed or told to like.

Bibliographic software has been another challenge.  None of it is that great, but to me it’s so much better than nothing (Carrie still types her bibs manually, so that does date us).  I tried to love Zotero because of its politics, but it has several problems: 1) you can’t edit output styles very easily, a must for someone in my position, where every publication I submit to seems to have its own idiosyncratic bibliographic style; 2) the web functionality is great, except that pretty much every book that gets sucked in from Worldcat or whatever has its information entered wrong: semicolons where they don’t belong, translators in the author field, etc.  It takes me less time to type a new entry than to correct one I’ve gotten off the web; 3) tech support is quite limited in my experience; and 4) I HATE the social networking function.  I don’t want a public bibliography.  I want to quietly go about my business as a scholar and not have to worry about maintaining or sharing or managing yet another persona online.

Endnote, meanwhile, is ugly, clumsy, and performs poorly on very large documents (and frequently seizes up in Cite While You Write, which I have turned off).  BUT Endnote has the following things going for it from my perspective: 1) the same (low) quality of importing as Zotero; 2) easy-to-edit output styles; 3) quite decent phone technical support for when I’m customizing; 4) lots of web functionality if I want it, which I don’t.  Also, I’ve used it for a while, so I have a big bibliography in there and lots of hours logged with its quirky workflow.

As with music or cooking, too much attention to the tools at the wrong time can take you away from the task at hand.  Even if it’s imperfect, settling on something that works for you (and that you don’t have to attend to much) for the duration of a project–or many projects–is better than switching frequently, especially mid-project.  There’s this thing in the digital humanities where people come across tools and say “look, you can _______.”  That’s like the logic of consumer electronics, with each new gadget trumpeting “new, enhanced functionality.”  Sure, you can.  But the question is what you actually do.  (And that is a subject for another post entirely.)

PS — Tara posted some other great links in the thread.  Miriam Posner’s “Managing Research Assets” has lots of suggestions and this HASTAC thread is also a good read.

Narrative: Can’t Live With It. . .

Last week in my Historiography seminar I taught Hayden White’s classic 1966 essay “The Burden of History.”  This is the third time I’ve taught him (the last two times were in 2000 and 2004) and each time, certain aspects of his argument seem fresh, others seem dated.  What’s wonderful is that those labels change each time.

Here he is writing about “methodological and stylistic cosmopolitanism” in history-writing (131):

Such a conception of historical inquiry and representation would open up the possibility of using contemporary scientific and artistic insights in history without leading to radical relativism and the assimilation of history to propaganda, or to that fatal monism which has always heretofore resulted from attempts to wed history and science. It would permit the plunder of psychoanalysis, cybernetics, game theory, and the rest without forcing the historian to treat the metaphors thus confiscated from them as inherent in the data under analysis, as he is forced to do when he works under the demand for an impossibly comprehensive objectivity. And it would permit historians to conceive of the possibility of using impressionistic, expressionistic, surrealistic, and (perhaps) even actionist modes of representation for dramatizing the significance of data which they have uncovered but which, all too frequently, they are prohibited from seriously contemplating as evidence. If historians of our generation were willing to participate actively in the general intellectual and artistic life of our time, the worth of history would not have to be defended in the timid and ambivalent ways that it is now done. The methodological ambiguity of history offers opportunities for creative comment on past and present that no other discipline enjoys. If historians were to seize the opportunities thus offered, they might in time convince their colleagues in other fields of intellectual and expressive endeavor of the falsity of Nietzsche’s claim that history is “a costly and superfluous luxury of the understanding.”

What struck me about the quote then, as now (as I embark on the very narrative readings for this week), is that while we can say that historians got much more adventurous with types of source material, and much more connected with modern social science, mainstream historiography has not (at least not in the work read by this media historian) taken up White’s call for new modes of representation.  Most histories I read are still written in narrative form, granted, with adventure and variety, but I would not say we get a lot of impressionistic, expressionistic, surrealistic, and (perhaps) even actionist modes of representation.  Sure, there are singular exceptions within history, like Carolyn Steedman, and outside it, like Michel Foucault or Friedrich Kittler.  Writers influenced by poststructuralism took a more ironic stance with respect to their sources.  But now, as then, the default mode, the accessible mode of history writing is the narrative mode.

Then again, it’s hard not to read this passage against the explosion of work in the digital humanities.  The difference is that that history may or may not turn out to be “written” in the same sense that the histories criticized by White were written.

Winter Break Reading

Looking back over the last six months or so, I have spent relatively little time reading completed books that weren’t either for teaching or directly related to something I was writing, or for something like a review of a tenure dossier.  Of course, I read lots of incomplete books: dissertations, drafts of books, books being reviewed for publication, etc.

In contrast, a little over two weeks of winter break afforded me lots of time to read and so I indulged in the following texts:

Michelle Murphy, Seizing the Means of Reproduction

Jaron Lanier, You Are Not a Gadget 

John Harwood, The Interface: IBM and the Transformation of Corporate Design, 1945-1976

Joanna Demers, Listening Through the Noise: The Aesthetics of Experimental Electronic Music 

N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis

Melissa Gregg, Work’s Intimacy

Alex Galloway, The Interface Effect

I’m jumping around in Richard John’s Network Nation but can’t say I’m done with it yet.

and of course a pile of articles.

In 2013, I plan to do better on the finished book front.

2013 Academic Plans

1.  Refuse requests for “to spec” writing.  I did five essays like that in 2012, and while each one was a pleasant diversion from my main research agenda, the sum total (alongside grant applications) has taken me too far from the work sitting on my hard drive waiting to get out.  I won’t declare bankruptcy on existing obligations, but I won’t add to them for a while.  If a request for a project comes in and there happens to be something cooking on my hard drive that’s a match, then we’re golden.

2.  In the first instance, use the space opened up by these refusals to get my own new stuff out, as I’ve been meaning to.  I’ve got a series of signal processing pieces that need to be finished and shown the light of day.  I’ve also got a stack of research on impairment phenomenology waiting to be read and written up.  I’ve put in a massive grant application for a project on instrumentality.

3. Read more finished books that aren’t directly related to a paper I’m writing or a class I’m teaching.  This category excludes books that are under review or come to me as part of tenure and promotion dossiers.  Too much of my current reading is instrumental in nature.  Articles are good too but I’m better about that in general.

4. Spend more time talking with people about that stuff, enjoying colleagues around town, students, etc.

5.  Write more for nonacademic publications, other people’s blogs, op-eds, etc.

6.  Continue to travel for presentations a little less.  Avoid heroic travel at all costs.