Yet Another Layton Obit (upon reading today’s Globe & Mail)

When I arrived in Canada in 2004, I was blown away by the mere existence of Jack Layton and Gilles Duceppe. Here were two politicians who spoke the truth, had roots in labor, and consistently and unambiguously promoted left values. Layton was of course frequently red-baited by both the Liberals and the press (a fact conveniently forgotten in the last week), but he never took the bait.* I recall an interview last spring where a reporter asked Layton if he thought his ties to labor would hurt him on some “business” issue. He said “no, next question.”

Even following the Conservatives and the Liberals in 2004, it was instantly clear how different Canadian politics were from those of the US. But for this immigrant, Layton — more so than Duceppe, because he proved able to reach people from different parts of Canada — was the real beacon of what is and could be different here.

* there must be a word for sovereigntist-baiting, which is what Nycole Turmel is going through right now

The Quintessential Los Angeles Experience

For our last lunch in LA, we go find the Korean Taco Truck. It’s delicious and not the kind of thing you’d find anywhere else. While we’re eating, the Quizno’s across the street calls the police, who come and chase the truck away. Then some enraged lady accosts us, ranting about how “we have rights as consumers” to a superior product (Korean tacos > Quizno’s subs) and we need to organize and start a movement in the city. Meanwhile, I’m sure someone in the truck has already tweeted their new location as they drive away.

That’s right, a people’s uprising for Korean tacos. From a truck.

Logics of the Humanities and Online Long Form Argument

Thursday’s talk at the NEH/Vectors institute by Alex Juhasz raised some interesting questions for me about online argument. Humanistic thought has, for hundreds of years, operated in relatively linear and long-form arguments. Even those works that challenge linearity (I am thinking here of the standard stable of poststructuralist critiques) exist with reference to it, and still demand long stretches of attention, close reading and easy cross-referencing. Although we normally think of books and journals as things you read from front to back, the codex form is quite amenable to the glitchy rhythms of close reading–starting, stopping, moving around, cross-referencing.

Juhasz’s book, Learning from YouTube, is an experiment in online long-form scholarship. Since she’s an artist, she is less committed to standard humanistic argument than someone like me might be. Her arguments are presented performatively, as immanent critique of YouTube within YouTube, produced by her, her students and a fleet of interlocutors. The story of the publication — including having to rewrite the entire boilerplate publication contract — is interesting in itself.

For me, her decisions about form were of greatest interest. There is no standard for long-form online argument, which means no set of conventions, no imaginary audience that can be presumed, and no sedimentation of past practice to rely upon. Consider all the standard practices that the writer of a book or journal article can fall back upon: stylistic parameters; a universe of possible intertextual references; formatting conventions; expectations of readers; classroom pedagogies that teach students how to read the arguments; citational styles; storage and preservation protocols; a clear sense of niche into which the work will fit (textbook, theory piece, monograph, etc); and the writer’s long period of apprenticeship in the form prior to publication (lots of seminar papers, dissertation, etc).

Limits and conventions are incredibly productive in both intellectual and creative undertakings. I asked Alex about this, and she said she effectively had to impose her own limits. I’m still not sure I understand what those decisions were — clearly they all orbit around immanent critique of YouTube from within YouTube, a certain participatory aesthetic and a refusal of a certain polished style.

This blog post by Elizabeth Cornell nicely explains Scalar, one of the publishing platforms at our Institute. As she points out, it exists within a universe of academic and paracademic publication, in contrast to blogs, websites, wikis, etc. There are also a host of experiments in sustained online argument, though most of these take one of two forms:

1) the open-access journal, which is modeled on older scholarly forms but is freely distributed online (The International Journal of Communication is a great example of this, as are its predecessors like Postmodern Culture, Culture Machine and First Monday).
2) serialized online publications that are not journals or books, but sustain intellectual exchange both within and adjacent to the academy. Bad Subjects started as a ‘zine and then expanded that format online. It was collectively produced, and we had themed issues that drew contributions from both academics and nonacademics. FlowTV has the immediacy of a ‘zine but is firmly rooted within the world of TV studies; its articles adhere to a shorter, commentary form. This aligns well with a certain kind of mid-century American journal article style, but that style has more or less been forgotten, and today isn’t recognized as a “journal article” in the same way an IJoC publication might be. (We often forget that the journal article style has changed a good deal over the last century.)

Scalar is attempting to connect the more dynamic aspects of online publication with the long form of journals and books. As drafts of my classmates’ projects start rolling out, I see that the platform works very well for two kinds of scholarship in particular: an annotated online archive, built from someone’s research; and close humanistic readings of audiovisual texts. At least they “make sense” to me, though it will be interesting to see what I think once they are completed and can be “read through” like other published work.

But the platform is still very much in beta (maybe alpha, even, but I’m impressed to say that it has never crashed on me), which means we are making stuff up, from citation practices (which I mentioned in the last post) to our imagined audiences.

To be fair, many of our assumptions about print are probably wrong as well. We know relatively little about our audiences and their reading practices. We buy many more books than we can read closely (I certainly do). But even those wrong assumptions are enabling in that once we learn to inhabit a form (whether or not we master it), we can forget about it except when we want to play with it. Right now, the challenge with Scalar–as with all digital publishing that is not simply an imitation of a book or a journal article–is that we are making it up as we go along. Alex made up her own limits, and my own struggle is finding more limits to impose on myself.

Born in Word vs. “Born Digital”

There’s a lot of talk in Digital Humanities about “born digital” content. But what we are really talking about at this point is different kinds of digital, and the politics of competing standards.

My project for this institute is to create a digital companion for my mp3 book. The purpose of the site is to present some core concepts from the book in alternative form, and in some cases to audify them with sonic examples that would be impossible to provide in print. And if we can get the scripts right, to do some fun things with the web. This, of course, involves porting some of the book’s prose into an online environment and reworking it for the more modular form of the web.

Here we run into a standards and path dependency problem (the very two concepts for which I was starting to build pages today): the writing platforms on a computer aren’t truly interoperable. For my entire academic career, I’ve used word processors. By any measure, all of my scholarship has been “born digital” — but the kind of digital matters. I wrote MP3 in Microsoft Word. Microsoft Word has some kind of insane text encoding that makes it impossible to paste formatted text into a WYSIWYG editor. Pasting it as plain text into an html window works, but then there’s the process of marking it up.
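Since I can’t paste formatted Word text directly, my workaround amounts to a cleanup pass before the markup stage. A minimal sketch of what that pass does — the character table is illustrative rather than exhaustive, and the function name is my own invention:

```python
# Characters that Word substitutes into typed text and that tend to
# survive a plain-text paste. This table is a small illustrative sample.
WORD_CHARS = {
    "\u201c": '"',   # left double ("smart") quote
    "\u201d": '"',   # right double quote
    "\u2018": "'",   # left single quote
    "\u2019": "'",   # right single quote / apostrophe
    "\u00a0": " ",   # non-breaking space
}

def clean_word_paste(text):
    """Normalize Word's typographic substitutions back to plain ASCII."""
    return text.translate(str.maketrans(WORD_CHARS))

print(clean_word_paste("\u201cMP3\u201d isn\u2019t\u00a0plain"))
# prints: "MP3" isn't plain
```

After a pass like this, the text is at least predictable, though the real labor of marking it up as html remains.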

Meanwhile, the platform we’re working on, Scalar, is both new to me and still under development. This is an unavoidable situation — and Scalar has many advantages over WordPress. Its brilliance comes from the lack of presupposed hierarchy that one finds in WordPress, and also the ability to mark up the media one comments upon in real time. I can immediately see advantages. Concepts that show up across chapters in the book can be directly linked via tags in the web version. But at the same time, it means that every time I can’t get it to work the way I want, I have to figure out whether it’s simply a matter of learning a new workflow (“user error”), whether I don’t get what they’re trying to do (“it’s not a bug, it’s a feature”) or whether the software needs further development (which is why we’re here).

For instance, right now footnotes (which I looove) are a somewhat laborious process. You have to decide how you want to tag or indicate them — for now I’m following Alex Juhasz’s approach and using a superscripted “[cit]” — then format that indication, then write the note, then return to the document I was working on. I’m hoping I can convince the designers to set up a script to do this more efficiently.
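The script I’m hoping for might look something like this sketch, which finds each “[cit]” marker and swaps in a numbered, superscripted link. The markup and function name here are my assumptions for illustration, not Scalar’s actual note mechanism:

```python
import re

def link_citations(html):
    """Replace each literal "[cit]" marker with a numbered superscript
    link (hypothetical markup; the real platform may differ)."""
    counter = 0

    def number(_match):
        nonlocal counter
        counter += 1
        # Each marker gets a sequential anchor to a note of the same number.
        return '<sup><a href="#note{0}">{0}</a></sup>'.format(counter)

    return re.sub(r"\[cit\]", number, html)

print(link_citations("One claim.[cit] Another.[cit]"))
# prints: One claim.<sup><a href="#note1">1</a></sup> Another.<sup><a href="#note2">2</a></sup>
```

Something this simple would at least spare the format-then-write-then-return round trip for every note.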

Writing in the digital humanities is like that. Part of the issue is the lack of existing interoperable standards between different writing platforms. And I’d love to say that open standards would be the solution–I am a believer in the gospel of open source and open access–but the reality is that you also need interoperability, and sometimes proprietary formats exert path-dependence effects: MS Word and Adobe’s .pdf format are great examples of that. Just try being an academic and refusing to use those two standards.

Part of the issue is that you are creating the tool as you’re using it, and making up the conventions as you go along. The frustrating part is something electronic musicians know very well: when you’re learning the interface, you’re not making music.

Audio in Digital Humanities Authorship: A Roadmap (version 0.5)

Background:

0.1. Existing digital humanities work has largely reproduced visualist biases in the humanities: work with images and audiovisual texts has thus far been assumed to be more central than work with sound as a text. And yet, sound is one of the major areas where huge gains could be made in digital publication. As with audiovisual texts (film, tv, etc), “born digital” publication allows commentary to be situated alongside or directly within media that unfold over time. Moreover, sound scholarship stands to gain the most from born digital publication. While film and television scholars have long been able to summarize the developments in a scene (whether formal, textual or narrative) in prose because of the commonplace abstractions available in visual language, this road has been much more difficult in sonic fields, whether we are talking about music, speech, architecture, art, geography or other areas.

0.2. With flash and html5, there is a mess of audio standards online. There is no obvious single direction. The purpose of this document is to elaborate some possible principles that might be of use.

0.3. This document makes specific reference to my resources and needs, but those will eventually be edited out. It makes reference to Scalar because that’s the environment I’m working in.

0.4. This is a work in progress. Comments are welcome and will be credited and incorporated.

Assumptions:

0.5 The purpose of digital humanities scholarship is analysis and criticism of (or in) forms other than print on a page. This may include “production” but it doesn’t have to — it could be as simple as written commentary accompanying sound files on a page. For our purposes we will just deal with the “in” and not worry about the “of” (which is more about the design of research tools).

0.6 With the right ethic and an activist infrastructure, digital humanities platforms have the potential to offer authors unprecedented opportunities to exercise their fair use rights, specifically in the close analysis, criticism and transformation of audio recordings for the purpose of advancing knowledge.

0.7. Such a project requires that the author have control over content when necessary, but otherwise technical concerns should not pull the author out of the creative flow of writing/producing (the model here is blogging engines like WordPress).

0.8. Similarly, for readers or listeners, audio should be as seamless a part of the experience as possible. This is a little more complicated than images or video in humanistic scholarship. Although the web is an audiovisual medium, the imaginary seamlessness of image and sound in, e.g., radio and television or the subservience of visual to auditory process in, e.g., telephony does not exist. This is true for computers as sound media in general: people tend to turn off the audio cues in programs where they even exist (most famously in Microsoft Word). If a website makes a noise when you arrive it is often considered rude. I would guess that the most common reaction to unexpected sound online is to turn it off. Websites more often function as containers for sound files that happen at the will of the user. This is how sites like Soundcloud and Bandcamp work. Violation of the rule is only acceptable in context, as in the old myspace page where it was expected that music might begin to play upon arrival.

Therefore, a website with sound ought to announce itself as such ahead of time. Old flash-based sites for bands did this very well, where if they did have a soundtrack that came on automatically, there was usually a visible “off” or “stop sound” button.

Basic Functions:

1.1. Ideally, an audio player for a digital humanities publication should be audible across platforms, whether we are talking about computers and operating systems or mobile devices.

1.2. Because of competing web standards (especially the flash/html5 mess) a single format won’t be enough. Although mp3 is the most common audio format in the world, it won’t play without flash in Firefox. Similarly, Safari won’t play ogg files, and I suspect that Internet Explorer won’t either. In some cases, a lossless file may be needed, and right now I don’t think there is a widely used player for .flac or alac files, which means .wav (and hoping the end user has a fast connection) is the best answer here.
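The fallback logic this implies can be sketched as a small support table. The entries simply encode the claims above (plus my guess about Internet Explorer), circa 2011, and will change as browsers do:

```python
# Which formats each browser can play natively (a sketch built from the
# support claims above; the Internet Explorer entry is an assumption).
BROWSER_SUPPORT = {
    "firefox": {"ogg", "wav"},            # no native mp3 without flash
    "safari": {"mp3", "wav"},             # won't play ogg
    "internet explorer": {"mp3"},         # assumed: no ogg either
}

# Best-guess preference order: smallest acceptable file first,
# lossless .wav as the last resort.
PREFERENCE = ["mp3", "ogg", "wav"]

def pick_format(browser, available):
    """Choose the first preferred format that is both hosted and playable."""
    playable = BROWSER_SUPPORT.get(browser, set()) & set(available)
    for fmt in PREFERENCE:
        if fmt in playable:
            return fmt
    return None  # nothing playable without a plugin

print(pick_format("firefox", {"mp3", "ogg"}))  # → ogg
```

The point of the sketch is only that the player, not the author, should carry this table around.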

1.3. The player should only translate formats when necessary and with minimal damage.

For example: Bandcamp does this well, requiring the musician to upload a lossless or .wav format file which it then encodes through its own algorithms into a wide range of possible formats.

This is great because it allows end users to either go with an acceptable sounding default format (realistically for now, this would be mp3 though perhaps in time this should be reviewed as standards change), or to choose another more to their liking.

1.4. Listeners should be able to start, stop and move around in the file at will.

This is probably the ideal for Scalar as well, since (e.g.) transcoding an mp3 into an ogg file is going to introduce some new artifacts that weren’t there before. Doing it a lot will make them more noticeable.

Admittedly, this may be more of an issue in terms of how browsers load and cache an audio file.

1.5. Whenever possible, audio should begin its travels to digital humanities publication as a .wav or lossless file. This will allow for maximum possibilities for editing and re-coding later on.

But lots of audio begins its life as mp3 these days (or m4a or ogg), in which case the original is already in a compressed format. The best approach here from a “preserve sound quality” standpoint (note the scare quotes) is a wrapper that allows the original format to play in the end user’s browser.

1.6. On the author’s side, the same sort of defaults ought to be in place, but they should also be defeatable. For instance, if I would like the listener to compare a wav file and an mp3 file because I’m discussing the format, the whole thing would fail if the player automatically converts the wav file to mp3 without giving me a choice.

1.7. The author should be able to loop audio, as well as set in and out points–at least as defaults that could be defeated by a user.
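The looping rule in 1.7 reduces to a simple check each time the player polls the playhead. This sketch is a pure-logic illustration with names of my own invention, not Scalar’s player:

```python
def next_position(current, in_point, out_point, looping):
    """Where the playhead should be after a poll: jump back to the
    in point once the out point is reached, but only while looping."""
    if looping and current >= out_point:
        return in_point
    return current

# Author sets a default loop over 12.0–15.5 seconds; a listener can
# defeat it simply by turning looping off.
print(next_position(15.6, 12.0, 15.5, looping=True))   # → 12.0
print(next_position(15.6, 12.0, 15.5, looping=False))  # → 15.6
```

In a real player the same check would hang off the audio element’s time-update event rather than a poll, but the author-default/listener-override logic is the same.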

1.8. An audio player ought to be able to display a still image or caption with the audio, alongside the time-based tagging that is already in Scalar.

Advanced Functions (none of these are essential but eventually someone might want them):

Author Functions:

2.1. It would be nice to have alternate tagging approaches. Soundcloud shows a waveform (a rough measure of amplitude) and allows comments as the waveform moves along. Soundcloud pitches this in terms of social media, as it is normally for listeners, not the musician, but it could easily be made useful as an authoring approach as well.

2.2. Even better would be a range of options for visualizing the audio and annotating it, such as spectral display, rhythm, etc. But this is probably not immediately necessary (I can easily drop a sound spectrogram into the text if I want to).

2.3. One could imagine a scenario where images — or even videos — could come and go over the course of an audio file, but at this point we are probably better off treating it as video.

2.4. The ability to set the relative volume of different clips.

2.5. The ability to choose between mono, stereo and multichannel audio. Though I don’t think we can assume listeners will have access to 5.1 sound with their computers or mobile devices, so then you’d need folddown techniques and a mess of other stuff.

Listener Functions:

2.6. Close listening: It would be outstanding if a user were able to manipulate the audio in some basic ways that mimic the nonlinearities inherent in “close reading” practices:

Not only skipping around, but
a) speeding up, slowing down,
b) scrubbing,
c) marking in and out points,
d) and freezing.

All of the functions except freezing are basic random access functions; the only audio freezes I’ve heard have either involved spectral processing or granular sampling, which is probably impossible to do online now. But it would be incredibly cool.

(As an author, I can do this for my audience but it’s a different thing if they can do it themselves.)

2.7. If the audio were available in different channel formats, it would be nice if the listener had a choice to compare.

(All of this sounds esoteric until someone wants to actually do an historical analysis of stereo, and I know there’s at least one group of scholars working on this topic.)

Advanced Topics in the Digital Humanities, Week 1

okay, surprise, we’re back.

I’m here in Los Angeles for the NEH/Vectors Institute for Advanced Topics in the Digital Humanities (and there’s something in the title about American Studies, but I’m here on Mellon money and don’t really count as an American Studies person as far as I can tell, so please forgive me that part). In comparison with my time at CASBS, about which I didn’t blog much but will at some point, the population is considerably younger, more politicized in that way of left humanists, and of course much more oriented to humanistic and interpretive modes of inquiry.

The projects fall across a wide spectrum. Some of us are trying to imagine useful digital companions for our books. In some cases, authors want to serve the communities they are studying, or reach a broader audience, facilitate conversation online, or contribute an archive or to an archive. Some of us are working with “born digital” projects that are blends of new media production and criticism. And some of us are trying to build archives or do other things that are more like the creative or productive projects one could find more frequently a couple of generations back among humanists. Most of the scholars in the group identify as American Studies or Ethnic Studies, which is no surprise given the call here.

Our first week was spent introducing ourselves, learning some basic tools, and hearing talks from a variety of special guests. The talks will continue throughout the institute (I know I should be most excited for the more intellectual stuff but I am particularly itchy to solder my own synthesizer) but there will be fewer and fewer organized sessions as we turn to lab time.

There’s also been an active twitter feed, and part of the week for me is continuing to negotiate my relationship to twitter and tweeting. The feed was mostly descriptive, but I started trying to post somewhat critical comments when statements were made that I found questionable. I’m not sure it worked. I keep expecting to love Twitter since my favorite part of Facebook is the status update, but somehow I find brevity difficult, as evidenced by the length of my posts here.

But it’ll be an ongoing experiment. I’m also taking the occasion to work on my multitasking skills which are not as sharp as they could be. That is, if multitasking actually exists.

I’ll write more in my next post about the platform I’m working with and the issues I’m confronting, as well as some more general digital humanities reflections, since blogging seems to be a preferred platform for discussing this stuff.

Summer Hiatus

It’s that time of year again. I’m nearing time to move to LA, backed up with work and flush with social engagements. Maybe I’ll get inspired in LA, but if not, I’ll see you in late August.