I don’t often get the opportunity to play with the latest browser technologies in client projects, unless they’re quick prototypes. When Artangel approached me about a project bringing Paul Pfeiffer’s work to The Space, I realised it would be best done using the Web Audio API. The API was so new at the time that, on the day I began building the prototype, Mozilla shipped a way of inspecting audio nodes in Firefox Aurora for the first time.

Paul Pfeiffer’s work is based around footage from the 1966 World Cup final between England and West Germany, from which most of the players were algorithmically removed, in cooperation with machine-vision expert Brian Fulkerson. Jerusalem isn’t the first piece in which Paul Pfeiffer erases elements from iconic imagery, but it is the first to be presented exclusively online.

Jerusalem blends the archival, reworked footage with audio and video clips that bring in context from 1966 and link it to other themes present in Paul’s work.

In technical terms, it’s a media player built in JavaScript that loops the main track forever and allows additional audio and video clips to be played over the top.
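The core shape of such a player can be sketched in a few lines. This is my own simplified illustration, not the project’s actual code, and the names are invented; the main track restarts itself whenever it ends, and clips can be overlaid on demand:

```javascript
// A sketch of the player's core shape (names are mine, not the project's):
// the main track restarts itself forever, and extra clips play over the top.
function createPlayer(mainTrack, clips) {
  let loopCount = 0;
  mainTrack.addEventListener('ended', function () {
    loopCount += 1;           // count how many times the main video has looped
    mainTrack.currentTime = 0;
    mainTrack.play();         // restart manually so loops can be counted
  });
  return {
    start: function () { mainTrack.play(); },
    loops: function () { return loopCount; },
    overlay: function (id) { if (clips[id]) clips[id].play(); },
  };
}
```

Counting loops matters later, because some rules only apply on a particular play-through of the main track.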

When I began working on the first prototype there were a few ideas about when the audio tracks should become available to play. I decided to implement simple rulesets which could be combined to express complex relationships between each track and the main video. For example, a track might appear only on the second play-through, and only to 40% of viewers, as long as the viewer had already heard at least two of the other tracks. This was intended to give the work a little unpredictability and make it seem slightly different every time it’s viewed.

Rules were expressed as data attributes on each additional audio and video track, so combining or changing them was trivial. It also meant that adding new types of rules was straightforward.

As the work began taking shape, most of my initial assumptions about rules were abandoned, as we realised the experience seemed too confusing. Instead, we decided that after thirty seconds we would present the viewer with an interface for playing each additional audio track. To help viewers understand what the interface represents, some of the audio tracks play for a set duration at certain times, highlighting the UI elements: little previews, if you like. Again, that information is communicated through rules in the markup. Here’s an example:

<audio id="bees" preload="auto" data-offset="30" data-preview-length="19" data-on-loops="1">
  <!-- sources -->
</audio>

This track plays 30 seconds into the main track, lasts 19 seconds, and appears only on the first loop of the main track.
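Reading those attributes back in JavaScript might look something like the sketch below. This is an illustration under my own assumptions, not the production code; `readRules` and `previewDue` are hypothetical names, and the `dataset` keys follow the `data-*` attributes in the markup above:

```javascript
// One way the data-* rules might be read (a sketch; the real code differs).
// dataset values arrive as strings, so everything is parsed up front.
function readRules(dataset) {
  return {
    offset: Number(dataset.offset || 0),            // seconds into main track
    previewLength: Number(dataset.previewLength || 0),
    onLoops: dataset.onLoops                         // e.g. "1" → [1]
      ? dataset.onLoops.split(',').map(Number)
      : null,                                        // null = every loop
  };
}

// A preview is due when the main track crosses the offset on an allowed loop.
function previewDue(rules, currentTime, loop) {
  const loopOk = !rules.onLoops || rules.onLoops.includes(loop);
  return loopOk &&
    currentTime >= rules.offset &&
    currentTime < rules.offset + rules.previewLength;
}
```

Checking `previewDue` on each `timeupdate` event of the main track would then be enough to trigger and stop the previews.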

There are also video interruptions, over which the viewer has no control. These are timed using information specified in the markup.
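As a hypothetical sketch of what such timing could look like (the behaviour and attribute plumbing here are my assumptions, not a description of the actual piece): when the main track reaches a time taken from the markup, pause it, play the interruption, and resume when the interruption ends.

```javascript
// Hypothetical sketch of a timed video interruption (my assumptions):
// pause the main track at a given time, play the interruption clip,
// and resume the main track once the interruption has ended.
function scheduleInterruption(main, interruption, atSeconds) {
  let fired = false;
  main.addEventListener('timeupdate', function () {
    if (!fired && main.currentTime >= atSeconds) {
      fired = true;           // fire once per page load in this sketch
      main.pause();
      interruption.play();
    }
  });
  interruption.addEventListener('ended', function () {
    main.play();
  });
}
```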

Specifying rules this way turned out to be very flexible and accommodated the many changes made since the initial prototype. Although we threw out a bunch of the sample rules I dreamed up at the beginning, the structure for defining and applying them made the work easy to expand and develop.

Very light browser feature detection is done, mainly to check for HTML5 audio and video support. If media can’t be played natively then the work cannot be viewed. If it can, then depending on the availability of the Web Audio API the viewer gets either a basic or a slightly richer experience.
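That detection can be as simple as checking for the relevant constructors. The sketch below is my own illustration rather than the project’s code; taking the global object as a parameter just makes it easy to test:

```javascript
// A sketch of the kind of light detection described above (not the actual code).
// Takes the global object as an argument so it can be exercised with mocks.
function detectFeatures(global) {
  var canPlayMedia =
    typeof global.Audio !== 'undefined' &&
    typeof global.HTMLVideoElement !== 'undefined';
  var hasWebAudio =
    typeof global.AudioContext !== 'undefined' ||
    typeof global.webkitAudioContext !== 'undefined'; // still prefixed in some browsers then
  return { canPlayMedia: canPlayMedia, hasWebAudio: hasWebAudio };
}
```

In the browser this would be called as `detectFeatures(window)`, and the two flags decide between the basic and richer experiences.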

In the basic version the audio tracks are crossfaded using just the volume property on the audio elements. When the Web Audio API is available, a subtle low-pass filter is also applied during the crossfade, and to the main track, which is always audible in the background and so seems a little muffled.
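The node types here (`createMediaElementSource`, `BiquadFilterNode`, `GainNode`) are real Web Audio API constructs, but the wiring below is my sketch of the richer path, not the production code, and the equal-power fade curve is one common choice rather than necessarily the one used:

```javascript
// Equal-power crossfade gains for a progress value t in [0, 1]:
// the two curves cross at equal loudness rather than equal volume.
function crossfadeGains(t) {
  return {
    out: Math.cos(t * 0.5 * Math.PI),        // track fading out
    in: Math.cos((1 - t) * 0.5 * Math.PI),   // track fading in
  };
}

// Sketch of the richer path: route a media element through a low-pass
// filter and a gain node before the speakers (ctx is an AudioContext).
function connectWithLowpass(ctx, mediaElement, cutoffHz) {
  const source = ctx.createMediaElementSource(mediaElement);
  const filter = ctx.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = cutoffHz;  // a lower cutoff sounds more muffled
  const gain = ctx.createGain();
  source.connect(filter);
  filter.connect(gain);
  gain.connect(ctx.destination);
  return { filter: filter, gain: gain };
}
```

Driving the returned gain with `crossfadeGains` during a fade, and lowering the filter’s cutoff at the same time, gives the muffled effect described above.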

AudioParam, part of the Web Audio API, allows you to do a bunch of really cool things: for example, you can ramp a parameter to a target value at a specific time, either linearly or exponentially, which allows fine-grained control over sound. Though Jerusalem is relatively simple when it comes to audio manipulation, I can see how powerful these features would be in the hands of someone who really knew what they were doing.
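The scheduling methods themselves (`setValueAtTime`, `linearRampToValueAtTime`, `exponentialRampToValueAtTime`) are standard AudioParam API; the two-second fade below is just an invented example of using them:

```javascript
// Sketch of AudioParam scheduling (the times and values are invented).
// Fades a gain parameter from full volume to silence over two seconds.
function fadeOut(gainParam, now) {
  gainParam.setValueAtTime(1.0, now);               // pin the starting value
  gainParam.linearRampToValueAtTime(0.0, now + 2);  // linear ramp to silence
  // The exponential variant cannot ramp to exactly zero, so a tiny
  // target value would be used instead:
  //   gainParam.exponentialRampToValueAtTime(0.0001, now + 2);
}
```

In the browser this would be called as `fadeOut(gainNode.gain, ctx.currentTime)`, with the whole curve scheduled up front and rendered sample-accurately by the audio engine.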

I don’t know very much about audio, so it’s hard for me to imagine just what is possible with the new browser capabilities, but Chris Lowis showcases interesting projects and new Web Audio API features in his weekly newsletter.