Another guest post at ProfHacker on this always-timely topic.
So I went to Amazon to pick up Constance Classen’s The Deepest Sense: A Cultural History of Touch, which I’m looking forward to reading. This is what I found:
While I’m definitely interested in picking up the book, and while it is clearly eligible for super saver shipping, this is the first over-$1500 academic paperback I’ve ever seen. Either someone made a whopper of a typo, Amazon has some kind of algorithmic error (or U of Illinois press does), or the press has a truly insane pricing plan for purchasers outside the United States.
Luckily, there are other vendors who will sell it for less.
Assuming it’s an error, and one that might be to the author’s detriment, I emailed Amazon to ask why the book was so expensive. If I hear back, I’ll post it here.
Update:
Here’s the email I got back from Amazon. Suitably cryptic:
Thank you for writing to us at Amazon.ca.
I am sorry, but this item’s price “The Deepest Sense” was listed as wrongly on our web site.
We build our web site information from many sources, and we really appreciate knowing about any errors which find their way into it.
I’ve forwarded your message to the inventory department and I can ensure that this error is corrected as soon as possible. This process will takes 5 to 7 business days, so we request you to wait until that time to get this issue corrected.
I will write back to you within 5 to 7 business days with a resolution.
Thank you for your patience and understanding, and thanks for shopping at Amazon.ca.
Did we answer your question?
If yes, please click here:
http://www.amazon.ca/rsvp-y?c=cycdthew3542761760
If not, please click here:
http://www.amazon.ca/rsvp-n?c=cycdthew3542761760&q=caff
Please note: this e-mail was sent from an address that cannot accept incoming e-mail.
To contact us about an unrelated issue, please visit the Help section of our web site.
Best regards,
Vel S.
Amazon.ca
Your feedback is helping us build Earth’s Most Customer-Centric Company.
http://www.amazon.ca
Postscript:
On Facebook, Dave Noon pointed me to this $23 million book about flies.
Yesterday’s New York Times caught up with a story that’s been making the rounds of internet music circles since Zoe Keating published her finances about a year ago: in many cases, Spotify pays so little it might as well not be paying artists at all. Sure, artists get fractions of cents in royalties, but very few, if any, will be able to make a living off streaming audio. This should surprise nobody. The commodity status of recordings has been in question since the file-sharing boom. But getting people un-used to the idea of buying recordings does not automatically mean that recordings will lose their commodity status. In the Spotify example, recordings have moved from physical commodity to service. Of course, radio–and practices like middle music before it–already employed a service model for music. But middle music was performed by living musicians who were paid; radio had a royalties system tied to a commodity system of recordings. Spotify is what you get when you subtract recordings from radio.
In their 2006 book Digital Music Wars, Patrick Burkart and Tom McCourt predicted this outcome. They argue that the utility model–they adopt the term “celestial jukebox” from industry publications–is actually preferable to the sales-of-goods model for large segments of digital media and music industries, and that the result will be diminished returns for both musicians and for audiences:
The Jukebox may promise more innovative music, more communities of interest for consumers, and lower prices for music; in fact, however, it gives us less (music in partial or damaged or disappearing files) and takes from us more (our privacy and our fair-use rights) than the old system. . . . Rather than a garden of abundance, the Celestial Jukebox offers a metered rationing of access to tiered levels of information, knowledge and culture, based on the ability to pay repeatedly for goods that formerly could be purchased outright or copied for free (136-137).
Real reform of the music industry has to start with social questions about music, not new business models. How do we want to support musicians and music-making? What kinds of musical culture do we want to develop? Those questions should animate social thought about music, music criticism, and music policy. If you’ve never heard of music policy as a current thing, that’s a sign of how badly we need it right now.
When I first moved to Montreal in 2004, people would tell me they were going to “London” and I’d get excited for them because I thought they were going to the UK. Then they’d say “London, Ontario” and it would be slightly disappointing. Nine years later, last night, I’m checking in at the airport to fly to London, Ontario for the first time, and the woman says to me, “can I see your passport?” When I explain it’s Ontario, she says “oh, sorry” and gives me a disappointed look.
Some cosmic loop has just closed.
One of the problems with the move to digital humanities and big data is a kind of “gadget logic” taken from the advertising rhetoric of consumer electronics. Lots of the reportage around digital humanities work on the big data side of the field focuses on what computers can do that people can’t. By that I mean there is a “look! now you can ______” rhetoric that to my ears sounds exactly like an ad for a new consumer electronics product. “Look, now you can update your friends on what you’re doing in real time” is not all that far from “look, now you can manipulate data in this new exciting way.”
The point, as a friend put it to me recently, is not whether you can do this or that kind of analysis, but whether you actually do it and whether it tells us anything we don’t know.
Take this piece from The Awl for instance. It’s all very cool, except for the following problems:
1. The two main examples are feats of good research, but at least as summarized, they generate no substantial new knowledge, nor do they transform ongoing debates scholars are having–which is what I look for when reading new material (though I suspect that a reading of the Lim book would reveal deeper knowledge than the article represents–I only had a look at the article). I realize the actual work may be more sophisticated than the article, but since the article is claiming that there’s a revolution in knowledge, I’d say it’s got a burden of proof to demonstrate that claim. We knew that press use of sources was biased toward men (and as a comment points out, it is also biased toward official sources put forward by PR departments and agencies, which likely means more men). We also knew that presidential rhetoric got less flowery and pedagogical over the course of the 20th century.
2. The method discussion in the journalism article does not even approach giving us something that would look like reproducible science. If it’s going to claim to be scientific, then this is a major failure. If, on the other hand, we’re doing some kind of interpretive humanities work, that’s fine, but then they can’t claim the mantle of science. They use words like “data mining” and “machine learning” and “automatic coding” but good luck trying to figure out what judgments were encoded into their software and whether you might want to make different ones if you were to do the study yourself.
I agree with the authors that
Our approach–apart from freeing scholars from more mundane tasks–allows researchers to turn their attention to higher level properties of global news content, and to begin to explore the features of what has become a vast, multi-dimensional communications system.*
But that raises two questions. First, will attention to higher-level properties of a phenomenon yield greater or more profound insight? Only if scholars know how to ask better questions, which implies a greater sophistication with both theory and synthetic thought–two areas where current AI is woefully lacking. Second, to what degree should scholars be free of direct engagement with the data? Another group of people in the digital humanities seems to argue that you can’t be a real scholar anymore if you can’t code. As I suggested yesterday, that’s a silly proposition, but people should of course know how stuff works, which would include their data.
Synthetic, abstract, large-scale work requires a thinker who can move between scales–something Elvin Lim appears to do more effectively in the article’s representation of The Anti-Intellectual Presidency. The description makes it look more like real scholarship to me because Lim uses digital methods when they suit the specific question he’s asking, turns to other approaches when called for, and approaches the topic from multiple registers. If you’re hearing echoes of what I said yesterday about writing tools, you’d be right.
Good work comes from good questions, not from shiny new tools. In the hands of a skilled craftsperson, a new tool can yield beautiful and unexpected results, and that’s ultimately what we want. But we can also sometimes get those from old tools.
See also: musicians and instruments, surgeons and their instruments, drivers and cars, cooks and kitchens.
We also need to question the value of speed as a universal good. For some things, like getting out an op-ed, speed is good. But for certain kinds of scholarship, slowness is better–the time with a topic helps you to understand it better and say smarter things. If everyone can write books and articles twice as fast as they used to–or faster–nobody will be able to read them and keep up, except for machines.
Oh, wait. That already happened.
As one of my teachers told me, the hardest thing to do after jumping through all the institutional hoops is to remember why you got into academia in the first place. But that’s also the most important thing. Much as I love talk of and experimentation with new tools and processes, I worry that some of the DH discussions slide over into old-fashioned commodity fetishism and lose track of the purpose of the work in the first place. As my friend said, it’s not whether you can that matters in scholarship. It’s what you do.
—
*I disagree with the authors’ assertion that the modern media system is more complex and multilayered than earlier systems. Have a look at Richard Menke’s Critical Inquiry essay on the Garfield assassination (which I just taught last week). Sure, it’s different, but from a perspective of cultural analysis there is every bit as much complexity (perhaps more, since less of it is automated and systematized), though the practice of people walking around and writing on giant bulletin boards isn’t really well enough archived to be data mined.
A Facebook conversation started by Tara Rodgers got me thinking about the tools we use as writers. Obviously, the creative process is highly idiosyncratic and personal at some level even though none of us are as unique as we think we are. Try everything; select what works. The proliferation of new tools is exciting but it can also overwhelm, or worse, tempt people to contort their working practices to fit new tools when the old ones worked just as well.
There are tons of alternatives to MS Word now, and of course lots of alternatives for bibliographic software. There are also programs like Evernote and Scrivener, which many academics find incredibly useful. My time with Scrivener was a disaster. Part of it was impatience. I’ve been using Word since sometime in the early 1990s (how long? Before Word, I wrote my BA thesis in the Leading Edge Word Processor, which lacked footnote functionality). So trying to write while learning the quirks of a new program would have to promise a massive payoff for me. Writing is, at its most essential, a creative and synthetic process. For me, the goal of writing technology is therefore to get into the “zone”–where I’m thinking about the writing and not the tool–as fast as possible, and that means a high level of instrumental comfort with the tools. The organizational powers of something like Scrivener are not only unnecessary for me; I think they get in the way of my practice as a writer.
That’s not to say Word is without its problems. I mostly manage it by ignoring most of its functions and compartmentalizing the interface as much as possible. It’s customized for nobody, instead manifesting itself as distended bloatware. After discovering, right before final submission, that The Audible Past clocked in at over 160,000 words (I hadn’t bothered to check), I foolishly wrote MP3 as a single file, thinking it would help me keep track of the text’s size. It didn’t make the book any smaller, and Word did not like a 140,000-word file (I eventually got back down to around 120,000 words). Neither did Endnote, my bibliographic software.
A lot of the extant writing advice online seems to advocate more teching up. In Tara’s conversation, Jentery Sayers offered a link to some “how to” advice from Bill Turkel. The advice about backup and digitizing as much as possible is absolutely great. I use Apple’s Time Machine, which isn’t an awesome backup but it’s sufficient. Every few months I take an extra hard drive, clone the time machine drive, and take it to school. I also use Dropbox (any cloud service is fine) and pay for extra storage space. My music collection isn’t up there, but there’s lots of room for pretty much everything I write as well as the large set of “incidental writings” we do as academics ranging from course materials to letters of rec. Basically, everything that goes in a “Documents” folder goes in Dropbox instead, which is organized inside as a “documents” folder would be–by type of project/task and by date.
Audible Past was a file cabinet full of stuff I had to reorganize on my floor, over and over again. In grad school I had nightmares about house fires from time to time, and a friend even gave me a metal lockbox with a copy of his dissertation field notes. MP3 was a few folders, plus a labyrinth of digital files, many of which were searchable. I still printed things out to mark them up (like the oral historical interviews) but the physical copies were incidental to the process. The archival copy is digital, and backed up in multiple ways.
That said, I’m not convinced, as Turkel is, that everyone and her sister needs to know how to program to do research. A lot of what we do isn’t about making things but rather interpreting, synthesizing, and representing. Programming can help with that, but it is not the only or necessarily the best path for everyone. It certainly won’t make your writing better, easier to read, or more interesting, though it may lead you down different research paths. It’s like making original music. Some musicians are great and distinguished by their customized tools. Others are amazing with off-the-shelf technology.
The other thing that struck me about Turkel’s page is his suggestion to write in the smallest increments possible. As is clear from the above, I prefer to write in the largest increments possible. My best thinking and revising happens when I have several thousand words together in hand and can begin to actually behold the relations that are only latent in early drafts. I also spend a lot of time talking with people about my work and trying out ideas in public presentations before publishing (again the opposite of his approach). I would never dream of doing a book in secret because I can’t imagine what advantage it would give me as a thinker or a writer. Again, different strokes for different folks, but don’t assume your working practice needs to fit a tool you are handed or told to like.
Bibliographic software has been another challenge. None of it is that great, but to me it’s so much better than nothing (Carrie still types her bibs manually, so that does date us). I tried to love Zotero because of its politics, but it has several problems: 1) you can’t edit output styles very easily, a must for someone in my position, where every publication I submit to seems to have its own idiosyncratic bibliographic style; 2) the web functionality is great, except pretty much every book that gets sucked in from Worldcat or wherever has its information entered wrong–semicolons where they don’t belong, translators in the author field, etc. It takes me less time to type a new entry than to correct one I’ve gotten off the web; 3) tech support is quite limited in my experience; and 4) I HATE the social networking function. I don’t want a public bibliography. I want to quietly go about my business as a scholar and not have to worry about maintaining or sharing or managing yet another persona online.
Endnote, meanwhile, is ugly, clumsy, and performs poorly on very large documents (and frequently seizes up in Cite While You Write, which I have turned off). BUT Endnote has the following things going for it from my perspective: 1) the same (low) quality of importing as Zotero; 2) easy-to-edit output styles; 3) quite decent phone technical support for when I’m customizing; 4) lots of web functionality if I want it, which I don’t. Also, I’ve used it for a while, so I have a big bibliography in there and lots of hours logged with its quirky workflow.
As with music or cooking, too much attention to the tools at the wrong time can take you away from the task at hand. Even if it’s imperfect, settling on something that works for you (and that you don’t have to attend to much) for the duration of a project–or many projects–is better than switching frequently, especially mid-project. There’s this thing in the digital humanities where people come across tools and say “look, you can _______.” That’s like the logic of consumer electronics, with each new gadget trumpeting “new, enhanced functionality.” Sure, you can. But the question is what you actually do. (And that is a subject for another post entirely.)
PS — Tara posted some other great links in the thread. Miriam Posner’s “Managing Research Assets” has lots of suggestions and this HASTAC thread is also a good read.
Last week in my Historiography seminar I taught Hayden White’s classic 1966 essay “The Burden of History.” This is the third time I’ve taught him (the last two times were in 2000 and 2004), and each time certain aspects of his argument seem fresh while others seem dated. What’s wonderful is that those labels change each time.
Here he is writing about “methodological and stylistic cosmopolitanism” in history-writing (131):
Such a conception of historical inquiry and representation would open up the possibility of using contemporary scientific and artistic insights in history without leading to radical relativism and the assimilation of history to propaganda, or to that fatal monism which has always heretofore resulted from attempts to wed history and science. It would permit the plunder of psychoanalysis, cybernetics, game theory, and the rest without forcing the historian to treat the metaphors thus confiscated from them as inherent in the data under analysis, as he is forced to do when he works under the demand for an impossibly comprehensive objectivity. And it would permit historians to conceive of the possibility of using impressionistic, expressionistic, surrealistic, and (perhaps) even actionist modes of representation for dramatizing the significance of data which they have uncovered but which, all too frequently, they are prohibited from seriously contemplating as evidence. If historians of our generation were willing to participate actively in the general intellectual and artistic life of our time, the worth of history would not have to be defended in the timid and ambivalent ways that it is now done. The methodological ambiguity of history offers opportunities for creative comment on past and present that no other discipline enjoys. If historians were to seize the opportunities thus offered, they might in time convince their colleagues in other fields of intellectual and expressive endeavor of the falsity of Nietzsche’s claim that history is “a costly and superfluous luxury of the understanding.”
What struck me about the quote then, as now (as I embark on the very narrative readings for this week), is that while historians have gotten much more adventurous with types of source material, and much more connected with modern social science, mainstream historiography has not–at least not in the work read by this media historian–taken up White’s call for new modes of representation. Most histories I read are still written in a narrative form–granted, with adventure and variety–but I would not say we get a lot of impressionistic, expressionistic, surrealistic, or (perhaps) even actionist modes of representation. Sure, there are singular exceptions within history, like Carolyn Steedman, and outside it, like Michel Foucault or Friedrich Kittler. Writers influenced by poststructuralism took a more ironic stance with respect to their sources. But now, as then, the default mode, the accessible mode of history writing is the narrative mode.
Then again, it’s hard not to read this passage against the explosion of work in the digital humanities. The difference is that that history may or may not turn out to be “written” in the same sense that the histories criticized by White were written.