The Browser as a Tape Delay Feedback System

[Image: Ampex reel-to-reel tape machine]

Here I will be sharing the process of constructing a browser-based electronic instrument and performance environment. In this section, I will outline how to build a tape delay feedback system using JavaScript and the Web Audio API. This is not meant to be an introduction to the API, but rather a demonstration of the deeper systems it is capable of producing. You can view the project here, and the source code here.


With the development of the Web Audio API, the browser has become a powerful environment for building audio tools that are near-universally deliverable. One can create a network of AudioNodes to synthesize sound, trigger audio files, or process incoming audio signals. This also affords the programmer access to the large number of other resources and libraries available in web development.

Consumer audio production was born out of techniques researched and created in electronic music studios such as the GRM in France, the WDR in Germany, the Canadian Electronic Music Laboratory, and the San Francisco Tape Music Center. These techniques were realized on reel-to-reel machines, test equipment, radios, and homemade circuitry. Personal computing brought these capabilities into the studios and homes of aspiring artists across the globe, all while taking up exponentially less space.

Programmers sought to digitally replicate these techniques, and with that came a massive consumer market for digital audio workstations, plugins, and effects units, along with an influx of many millions of dollars in profit.

Tape Delay Explained

In this project, I recreated a tape delay feedback system that was first introduced by Pauline Oliveros in 1964 in her Mnemonics series.

This is a particularly interesting system to build: there is a particular task at hand (to delay a signal and feed it back on itself), but there are many variations in how it can function. In this version, the model is this:

  • Two tape machines are placed a desired distance apart.
  • An audio signal is run into the first tape machine, recorded, and output to the master.
  • The tape runs across into the second machine, bypassing the record head but running across the playhead, and is sent to the master. This creates a delayed output of the initial signal; the delay time is determined by the distance between the two machines and the inches-per-second setting on the machines.
  • In addition to being sent to the master, the delayed signal can be run back into the first tape machine, and the original signal out of the first machine may be coupled back on itself, creating a feedback system.

When a feedback system is wired, there must also be points at which the performer can control the gain of these signals. If no amplification control is introduced, the system will get very loud, very quickly. I highly suggest reading Oliveros’ article Tape Delay Techniques for Electronic Music Composers for more information on the concept of analog tape delay and feedback. But the model looks something like this:

Browser Implementation

The web browser is not traditionally thought of as a platform for music making beyond hitting play on an audio stream. The Web Audio API, and slightly before it the Audio Data API, equips the browser with the capabilities of producing synthesized tones, scheduling events, manipulating audio buffers (audio files), and using the computer’s global audio settings to read incoming signals.

There are a number of well-documented libraries that make the Web Audio API easier to use; for this project I chose Tone.js. It is important to note that this code and tutorial move from the bottom of the signal chain up, since variables must be declared before they are referenced in JavaScript (i.e. you can’t connect a signal to an amplifier if the amplifier does not exist first). I mention this because in analog electronics and synthesis, many people deal with the sound source first, and then process the signal from there.

First, we must allow the user to input a signal. To route the computer’s global audio settings (these can be accessed in Audio MIDI Setup on OS X), we have to declare the input and, in this case, connect it to a volume component to control the amplitude of the incoming signal:
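A minimal sketch of that input stage, assuming Tone.js v14+ (where `.toDestination()` replaces the older `.toMaster()`); the `userAmp` name follows the article, while `mic` and the `#start` button are illustrative:

```javascript
// Input stage: the computer's audio input feeding a volume control.
const mic = new Tone.UserMedia();       // the computer's global audio input
const userAmp = new Tone.Volume(-12);   // amplitude control for the incoming signal

mic.connect(userAmp);

// Browsers require a user gesture before audio can start.
document.querySelector('#start').addEventListener('click', async () => {
  await Tone.start();   // resume the AudioContext
  await mic.open();     // prompt for input access and open the stream
});
```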

The machine reverb is an effect that occurs when the original input signal is routed directly to the master, and the recorded signal of the first tape machine is also routed to the master. This creates a very short delay effect because of the distance between the record and playback heads of the machine.

For example, the Ampex 350 tape machine’s record and playback heads are about two inches apart. With the tape running at 7.5 ips, the delay is about 266 ms; running at 15 ips, the delay is about 133 ms. You can write that effect here:
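The delay time is just the head gap divided by the tape speed; `headGapDelay` is a hypothetical helper for that arithmetic:

```javascript
// Tape delay time = distance between record and playback heads / tape speed.
function headGapDelay(gapInches, ips) {
  return gapInches / ips; // delay in seconds
}

const at7_5 = headGapDelay(2, 7.5); // ≈ 0.2667 s at 7.5 ips
const at15 = headGapDelay(2, 15);   // ≈ 0.1333 s at 15 ips
```

The result can then be handed to a delay node, e.g. `new Tone.Delay(headGapDelay(2, 7.5))`, with both the dry signal and the delayed signal connected to the master output.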

The next step is to replicate the tape moving from one machine to the next. The leftChannel uses a Panner node to pan the signal hard left; it also connects to the Delay node and is routed to the master output. The delayed signal is run to the right channel for master output (the right channel also uses a Panner node to pan hard right), and is also routed back to the first machine through the tapeDelayL2Amp variable. This is our gain control for the feedback signal being cross-coupled back into machine one:
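A sketch of that routing, assuming Tone.js v14+ and the `userAmp` input node from the input stage; `leftChannel` and `tapeDelayL2Amp` follow the article’s names, while `tapeDelay`, `rightChannel`, and the 1.5 s delay standing in for the machine spacing are illustrative:

```javascript
const leftChannel = new Tone.Panner(-1);     // machine one, panned hard left
const tapeDelay = new Tone.Delay(1.5, 10);   // tape travel between machines (maxDelay 10 s)
const rightChannel = new Tone.Panner(1);     // machine two, panned hard right
const tapeDelayL2Amp = new Tone.Gain(0.5);   // gain on the cross-coupled feedback

userAmp.connect(leftChannel);
leftChannel.toDestination();                 // machine one to the master
leftChannel.connect(tapeDelay);              // tape runs on to machine two
tapeDelay.connect(rightChannel);
rightChannel.toDestination();                // delayed signal to the master
rightChannel.connect(tapeDelayL2Amp);
tapeDelayL2Amp.connect(leftChannel);         // feedback back into machine one
```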

Because the delayed signal re-enters the first machine, it needs to be panned left and delayed again. This is the portion of the code where some trickery needs to happen. Because you cannot just plug an output straight into an input and expect a result, I wrote a “phantom left channel” that still routes the signal appropriately, but on an entirely different audio path. So while we hear a left and right channel in our speakers, the signal is being routed out from three different paths (two left channels and one right).

The signal is then processed in a similar manner, with an added amplitude control that can monitor the signal feeding back from the delayed right channel into the left channel of machine one:
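One way to sketch the phantom left channel, again assuming Tone.js v14+ and the `rightChannel` panner from the routing above; `phantomLeft`, `phantomDelay`, and `rToLAmp` are illustrative names for the phantom path and the article’s R-to-L amplitude control:

```javascript
const phantomLeft = new Tone.Panner(-1);       // second, independent hard-left path
const phantomDelay = new Tone.Delay(1.5, 10);  // delays the re-entering signal again
const rToLAmp = new Tone.Gain(0.5);            // right-to-left feedback level

rightChannel.connect(rToLAmp);                 // delayed right channel feeds back…
rToLAmp.connect(phantomLeft);                  // …into machine one on its own path
phantomLeft.toDestination();
phantomLeft.connect(phantomDelay);
phantomDelay.connect(rightChannel);            // and on through machine two again
```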

To create a UI for controlling amplification levels, one can use basic HTML range sliders and assign them to the Tone.js nodes accordingly. I used the NexusUI library, but either way, assigning control values is the same concept. We must build sliders for input volume, cross-couple control, and R-to-L control; these are all connected to the variables with “Amp” somewhere in their name. For user input, we created the userAmp object. Let’s create a Nexus slider and assign it accordingly:
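A sketch of the input-volume slider, assuming NexusUI 2 and the `userAmp` Tone.Volume node; the `#inputSlider` id and the dB range are illustrative:

```javascript
const inputSlider = new Nexus.Slider('#inputSlider', {
  size: [20, 120],
  min: -60,   // dB
  max: 6,
  value: -12
});

inputSlider.on('change', (v) => {
  userAmp.volume.value = v;   // map the slider to the node's volume in dB
});
```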

The important part here is that the userAmp value is controlled with some type of onChange function. For the other two controls, it is the same process:
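The two feedback controls can follow the same pattern; this sketch assumes the `tapeDelayL2Amp` cross-couple gain and an R-to-L gain node named `rToLAmp` (an illustrative name), along with illustrative element ids:

```javascript
const crossCoupleSlider = new Nexus.Slider('#crossCouple', { min: 0, max: 1, value: 0.5 });
crossCoupleSlider.on('change', (v) => {
  tapeDelayL2Amp.gain.value = v;   // feedback from machine two into machine one
});

const rToLSlider = new Nexus.Slider('#rToL', { min: 0, max: 1, value: 0.5 });
rToLSlider.on('change', (v) => {
  rToLAmp.gain.value = v;          // right-channel feedback into the left path
});
```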

Finally, to build a simple toggle to open and close the input signal, one can use the Nexus Toggle:
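A sketch of that toggle, assuming the `mic` Tone.UserMedia node from the input stage; `#inputToggle` is an illustrative id:

```javascript
const inputToggle = new Nexus.Toggle('#inputToggle');

inputToggle.on('change', (state) => {
  if (state) {
    mic.open();    // open (or re-open) the audio input
  } else {
    mic.close();   // close the input, silencing the system's source
  }
});
```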

All of these components are then placed in the HTML with the appropriate id tags, and they will appear in the browser window. For example:
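A minimal markup sketch; the ids are illustrative and simply need to match whatever targets the NexusUI components are given:

```html
<button id="start">Start Audio</button>
<div id="inputSlider"></div>
<div id="crossCouple"></div>
<div id="rToL"></div>
<div id="inputToggle"></div>
```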


With this instance, the browser becomes a complete performance environment. Oliveros recorded numerous works with this system, and other composers followed suit in their own ways. A large number of widely used consumer effects were born from this concept. What used to require a massive haul of audio gear now lives in the computer, and what is unique is that it is universally deliverable to nearly any device that runs a browser (implementing for mobile takes additional steps not covered here). There are no compatibility issues across operating systems, and with proper instructions for use and a thoughtful UI, the system requires no previous knowledge of audio software. One can write a network of several effects processors, access new variations of these networks via URL, or build and run it locally, with the web browser acting as the delivering vessel in which this vast potential is housed.

In future posts, I will go through the process of constructing the rest of this app: the audio sources, the recording device, and MIDI implementation using the Web MIDI API.