Interview: Shawn Greenlee, Professor at RISD
Shawn Greenlee is a professor and performer with a big ambition: to take spatial audio into its next phase. Shawn teaches at the famed Rhode Island School of Design (RISD, or “riz-dee” as it’s usually called), where he has helped bring sound and spatial audio into the fold of an institution known more for visual arts, design, and architecture.
I first met Shawn Greenlee when he invited me to lead an intensive two-day DIY controller workshop teaching students the principles of sensors, MIDI, parametric control, and hardware. I recently reconnected with him at the NIME conference, where I found the Sensel Morph at the center of his performance, and learned he had been experimenting with different ways of controlling audio and distributing it among large speaker arrays.
It was clear that he was pushing the Morph well beyond the overlays, incorporating multi-touch and pressure into really dynamic noise-scapes. As is often the case with electronic music, it wasn’t totally clear what was going on behind the scenes, so I chased him down at the beginning of his semester to ask some questions.
Let’s cut to the chase. What are you doing up there? What is your live performance setup for the piece I saw at NIME? Where else have you performed it?
With my latest performance system, which I call Quarries, the work is geared toward solo electroacoustic improvisation. I built the software in Max. The primary sounds are the result of some erratic synthesis procedures that I refine between performances. There are multiple feedback points within the synth parts of the patch, so the outcomes are often unpredictable. Additional sounds come from my library of personal field recordings, most recently from three weeks I spent in Alaska in April and May. These sounds get mixed in and are used to set some of the synthesis parameters.
On the physical controller side of things, the latest incarnation of Quarries features two Sensel Morphs, a Faderfox DJ44, and two Bodelin USB microscopes. The microscopes are live cameras, pointed downward at the table. I put drawings I’ve made under the scopes, scan these, and apply scanned rows of image data as waveforms for synthesis and as spatial trajectories/locations for sounds. Lately I’ve been using color information, applying color space to sound space. For instance, the hue, saturation, and lightness (HSL) of pixels become azimuth, elevation, and distance. Another way I use these scans is taking the luminance of pixel rows and applying these as time-varying transfer functions for waveshaping. The image then becomes like a distortion effect on a signal passing through it; the image reshapes the waveform.
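To make that image-to-sound mapping a little more concrete, here is a minimal sketch in Python rather than Max (an illustration, not Shawn’s actual patch): one function maps a pixel’s HSL values to azimuth, elevation, and distance, and another uses a scanned row of pixel luminance as a waveshaping transfer function. The specific scalings and ranges are assumptions made for the example.

```python
# Illustrative sketch only -- not Shawn's Max patch. Assumes NumPy.
import colorsys
import numpy as np

def hsl_to_position(r, g, b):
    """Map a pixel's hue, saturation, and lightness to azimuth, elevation, distance."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    azimuth = h * 360.0                 # hue -> azimuth in degrees (assumed range)
    elevation = s * 90.0                # saturation -> elevation in degrees (assumed range)
    distance = 1.0 + (1.0 - l) * 9.0    # lightness -> distance; darker pixels placed farther
    return azimuth, elevation, distance

def waveshape(signal, luminance_row):
    """Use one scanned row of pixel luminance (0-255) as a transfer function.

    Each input sample (-1..1) looks up a position in the row, so the image
    literally reshapes the waveform, like a drawn distortion curve.
    """
    curve = luminance_row / 127.5 - 1.0              # rescale pixels to -1..1
    idx = (signal + 1.0) * 0.5 * (len(curve) - 1)    # sample value -> row index
    return np.interp(idx, np.arange(len(curve)), curve)

# Example: a sine tone shaped by a stand-in "scanned" row of pixels
t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 220 * t)
row = np.random.randint(0, 256, size=640).astype(float)
print(hsl_to_position(200, 40, 90), waveshape(sine, row)[:4])
```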
The recent addition of the microscopes is a return to some earlier work where I used live cameras for real-time graphic sound synthesis. Image-to-sound conversion has been happening in Quarries all along, but earlier it was only visible in my interface as a type of digital score. While adapting the work for multi-channel loudspeaker arrays, I decided that I wanted to bring back the physicality of using actual drawings and the camera.
The Sensel Morphs are now essential to my set-up. Right now I’m using these without overlays (though I am using the overlays for other things). On each I’ve mapped out four regions for 14-bit MIDI XYZ. I apply these to synthesis and effects parameters. One of the regions is a quad-mixer between the effects. Pressure is mostly applied to feedback and filter settings. I really love that the Morph does 14-bit MIDI; the resolution is great. Sometimes I’ll use toothpicks as instruments when my fingers aren’t precise enough.
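For readers unfamiliar with it, 14-bit MIDI gets its resolution by pairing two ordinary 7-bit control changes: per the MIDI spec, the coarse byte arrives on CC n and the fine byte on CC n+32, giving 16,384 steps instead of 128. Here’s a tiny sketch of how the two bytes combine; the CC numbers in the comments are illustrative assumptions, not necessarily how the Morph regions are mapped.

```python
# Minimal sketch of assembling a 14-bit MIDI CC value from its MSB/LSB pair.
# The CC assignments mentioned below are hypothetical examples.

def combine_14bit(msb: int, lsb: int) -> float:
    """Combine two 7-bit CC bytes into one normalized 14-bit value (0.0-1.0)."""
    raw = (msb << 7) | lsb      # 0..16383 rather than 0..127
    return raw / 16383.0

# e.g. a region's X on CC 16 (MSB) + CC 48 (LSB), Y on 17/49, Z (pressure) on 18/50
x = combine_14bit(msb=64, lsb=97)
print(f"{x:.5f}")               # ~0.50595 -- far finer steps than 7-bit MIDI
```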
I use two of the Morphs because there are two synthesis voices, each with the same set of parameters. That logic of two voices carries over to the Faderfox controller. I use it mostly as the layout suggests - like a DJ mixer, though with idiosyncrasies.
I’ve been performing with versions of this particular system since 2016, adding and subtracting different elements. At times Eurorack modules and live sound sources have been incorporated.
Some additional places I’ve performed the work include Cube Fest 2016 at Virginia Tech, BEAST FeAST 2017 at the University of Birmingham in the UK, and ICMC 2018 in Daegu, South Korea, as well as venues and clubs in and around Providence.
When I first met you at the workshop, you were just starting to introduce multi-speaker, spatial audio into select classes. Now you have progressed to having a dedicated, multi-channel, multi-speaker system called The Spatial Audio Studio in downtown Providence. RISD is not really known as a music or sound production school - it’s more concerned with design and the tangible world. Why is sound, and spatial audio in particular, an important part of “design?” How does the invisible world of “air pressure modulation” fit into a curriculum that has produced architects, sculptors, and fine artists?
Our new spatial audio studio at RISD features a 25.4 loudspeaker array within a purpose-built, acoustically isolated room. This allows students to effectively work with a variety of spatial audio techniques, including higher-order ambisonics.
Sound is a bridge between our majors at RISD. I’ve seen students from nearly every discipline engage sound in sophisticated ways. The sonic arts and sound design have been on the periphery of the curricula for a long time, sometimes addressed within a degree program as it pertains to a specific field: for instance, an acoustics course in Interior Architecture, a sound-for-screen course in Film/Animation/Video, or a Furniture Design course that partnered with Steinway Pianos. Other times, courses have been available as electives, such as offerings in our Digital+Media Department and Liberal Arts Division. RISD also has a great relationship with Brown University, and students interested in sound have cross-registered for courses in Brown’s Music Department for years. RISD faculty, staff, and students are active in the Providence music community. There is a significant history of musicians and bands emerging from RISD. Best known are Talking Heads, but later groups like Les Savy Fav and Lightning Bolt also come to mind, as well as musicians like Pat Mahoney of LCD Soundsystem and Phil Puleo of Swans… the list could get quite long.
I think this momentum has brought RISD to the present moment, where we’ve created this new, forward-thinking sound studio and have been building up the sonic arts curriculum, specifically in our undergraduate concentration in Computation, Technology, and Culture and our graduate MFA degree in Digital+Media.
You asked why sound is an important part of design. I’ll flip that question: how could sound not be important for design? Every day I hear examples of choices made in design and architecture where sound was not considered an important aspect, likely to negative effect. In fine arts, many students position their work within contemporary practices, which can be mixed media and multimedia, sometimes installation, sometimes screen-based. Once more than the visual is addressed in their work, the need for education in the sonic arts becomes clear.
Most people are probably familiar with surround sound in their homes or in theaters, but what you are working with is very different from a 5.1 system that someone might have at home. On top of that, the ubiquity of earbuds and headphones has really put the “hi-fi” experience into the “particular old man” category. How do students take the specialized experience that you teach into the broader experience of daily life? I’m trying to avoid the word “consumer” here.
In terms of spatial audio, one of the reasons for growing interest is the latest wave in immersive media (360-degree video, virtual/mixed/augmented reality). Sound design and composition can be equal to the visual in creating these immersive experiences. One of the reasons I stress the need to work with multi-channel loudspeaker arrays and not only headphones is that the experience of sound is not only through the ears. There is a tactility to sonic experience: those air pressure waves that hit your skin and get to your bones. Headphone listening also cuts you off from your acoustic environment. While sometimes headphones are necessary or preferable, loudspeaker listening blends with the sounds of a place. That’s an opportunity.
We don’t know exactly what the future holds for spatial audio technologies. At some point, what now seems novel in spatial audio could become ubiquitous. In some industries the approach to “surround sound” has already expanded quite a bit, as in cinema, theater, and gaming. Eric Lyon from Virginia Tech has a great paper called “The Future of Spatial Computer Music” where he suggests that music specifically composed for high density loudspeaker arrays is still in an early phase and that there is much room for research and innovation. In the computer music field, 8-channel sound systems are fairly standard. Because of this, there are many ideas and techniques to draw from in composing for larger arrays, but there is also plenty of ground for experimentation and new ideas.
Beyond the esoteric, philosophical, and aesthetic challenges and implications of spatial audio, what are some of the technical challenges and requirements?
For sure, there are lots of challenges. One of the problems is how to compose or design for spatial audio when you don’t have access to a multi-channel speaker array. Fortunately, one can use spatial audio techniques such as ambisonics with headphone listening. This involves decoding the ambisonic channels to binaural using HRTFs (head-related transfer functions), which model how sound arrives at the ears around a human head. Mileage varies, but that approach can be useful when all you have is your laptop and you’re trying to estimate how something will work on more than two speakers.
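As a rough illustration of that headphone workflow (a conceptual sketch, not any particular library’s decoder), the steps are: encode a mono source into first-order B-format, render it to a ring of virtual loudspeakers, then convolve each virtual feed with a left and right HRIR. The HRIRs below are random placeholders standing in for a measured HRTF set.

```python
# Conceptual first-order ambisonics-to-binaural sketch. The HRIRs are random
# placeholders; in practice you would load measured HRTF data.
import numpy as np

def encode_foa(mono, azimuth_deg):
    """Encode a mono signal at a given azimuth into horizontal B-format (W, X, Y)."""
    az = np.radians(azimuth_deg)
    return mono / np.sqrt(2), mono * np.cos(az), mono * np.sin(az)

def decode_to_speaker(w, x, y, speaker_az_deg):
    """Basic decode of horizontal B-format to one virtual loudspeaker."""
    az = np.radians(speaker_az_deg)
    return 0.5 * (np.sqrt(2) * w + np.cos(az) * x + np.sin(az) * y)

mono = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 48000))
w, x, y = encode_foa(mono, azimuth_deg=90)               # source off to the left

speaker_azimuths = [45, 135, 225, 315]                    # virtual quad ring
hrirs = {az: (np.random.randn(128), np.random.randn(128)) for az in speaker_azimuths}

left = sum(np.convolve(decode_to_speaker(w, x, y, az), hrirs[az][0]) for az in speaker_azimuths)
right = sum(np.convolve(decode_to_speaker(w, x, y, az), hrirs[az][1]) for az in speaker_azimuths)
print(left.shape, right.shape)   # two channels ready for headphone playback
```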
In terms of challenges for setting up a loudspeaker array, there are considerations like deciding on acoustic treatment for the space, the type and number of loudspeakers, and requirements for mounting, cabling, and power. Then there’s how to get your sound to the loudspeakers - choices to make about audio interfaces and software. With a large array, networked audio standards like Dante and AVB have made set-up and management much easier and more flexible. We’re using the AVB standard, available with MOTU interfaces. These give us plenty of I/O and the ability to reroute signals in a web browser. With software, there are some considerations. For instance, make sure you know the maximum number of output channels available, so you can address all the speakers in your array. Another software consideration is what spatial audio libraries or plugins are available for your preferred DAW or programming language.
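That output-channel check is easy to script. As a small example, assuming Python with the sounddevice library, and assuming a 25.4 array needs 29 physical outputs (25 full-range speakers plus 4 subs), you could scan the available interfaces like this:

```python
# Hypothetical check that an audio interface can address every speaker in the array.
import sounddevice as sd

REQUIRED_OUTPUTS = 29   # assumed for a 25.4 array: 25 speakers + 4 subs

for device in sd.query_devices():
    if device["max_output_channels"] >= REQUIRED_OUTPUTS:
        print(f"OK: {device['name']} offers {device['max_output_channels']} outputs")
```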
We’ve already touched on the fun parts of your work. I’m guessing that there’s a big challenge to get new ideas accepted in an institution with a long history and legacy, even one that is dedicated to creativity. I don’t want to lead you into casting shade on RISD, but I am interested in some of the challenges and politics of leading change in institutional thinking.
A difficulty is that although one should obviously expect proposed changes to take time and be met with a review process, there can be moments when momentum is lost because other issues take precedence. It takes patience and persistence to get a new curriculum going or a new academic space up and running. There’s no way to do it unless other people agree with your idea and somehow advocate for or join in the work. Development of a proposal has to include revisions based on feedback from administrators, colleagues, students, and other stakeholders. Other initiatives will take priority and might delay things. It’s important to recognize which detours actually get you closer to what you want to achieve. Eventually you reach the point of seeking the necessary approvals, writing and making presentations for relevant committees, and if all goes well, implementing the idea.
When I started teaching full-time at RISD in 2010, I already had the idea to build up the sound curriculum and create a multi-channel studio. To me the need was evident. My office hours were jammed with students seeking advice on sound, programming, and electronics. I had great support from leadership, but I needed to demonstrate the value of this curriculum and the type of space I imagined. So, it became a question of what was possible at the time that could build toward that goal. On the curriculum side I was able to establish new courses, and a group of us eventually created a concentration (which is like an interdisciplinary minor) called Computation, Technology, and Culture. That process took a few years, but the progress was steady. For the space, there were two earlier versions of what is now the spatial audio studio. The first was an 8-channel array that we set up for class sessions but had to break down afterwards. The limitation of not being able to permanently install the speakers led to version 2, which was a 10.2 system installed in RISD’s former library. The old library had become an alternative presentation space for lectures and performances, and we were able to advocate for some acoustic treatments to the room. This space was a much more developed proof of concept. The challenge was that this venue was in high demand and served many functions beyond our use. Working there for a few years made the need for a dedicated studio clearer, and we were able to show what problems the new space could solve.
What software do you use for composing and sound design?
Currently in the spatial audio studio, for DAWs it’s mainly Reaper and Ableton Live. For audio programming: Max, Pure Data, and SuperCollider. There’s also a host of plugins and libraries for spatial audio, including Envelop for Live, packages from ICST and IRCAM for Max, and the Ambisonic Toolkit for SuperCollider and Reaper. I’m excited to see what students will do with the Sensel Morph in this space. I think the Innovator’s Overlay will get a lot of use.
In my own work, I primarily use Max and Pure Data.
How has this toolset evolved over time?
I started using Max when I began my graduate studies in computer music in fall 2001 at Brown University. People were already using Wacom digitizing tablets as an interface for Max, and I quickly adopted them in my performance practice. Coming from a visual arts background, the action of drawing as a gesture made a lot of sense to me. With the Intuos 2 tablet, you could independently track the position, tilt, and pressure of multiple pens, and the pressure sensitivity was very high. The downside of the Wacom tablet was that eventually I’d break a pen, and it was costly to replace. So, I started to look for older tablets that people were discarding. I scored a bunch of old 12x12” serial tablets, the Wacom UD-II, and at one point had four of these going at the same time. But with these the resolution wasn’t as good as the Intuos line, and I continued to break pens. Eventually I ran out of them.
I then tried various trackpads. At the time, multi-touch was available with the iGesture Pad from Fingerworks. Then Apple acquired them, put multi-touch trackpads on laptops, and released the Bluetooth Magic Trackpad. With the “fingerpinger” external, you could get the data from these. I’d often use two for performances. Another development I incorporated was multi-touch data from iPads, sent wirelessly over OSC. The multi-touch aspect was exciting, and not relying on a stylus was a plus. But the trouble I’ve always had with these as interfaces is the wireless aspect. Inevitably I’d lose connection in the middle of a performance, usually because a battery shook loose or there was some random interference. I really needed a wired connection for robustness.
The Sensel Morph came just at the right time for me. It solves problems I had with earlier interfaces and provides new capabilities. I’m glad to be able to use the gestural vocabulary I developed over years with similar interfaces.
Below is a performance using one Morph and the iPad, before two Morphs were integrated into the setup. Beware of volume levels! This is not top 40 music, but pushing the boundaries of sound!
As a teacher, you get to see a lot of new ideas and manifestations of sound and technology design. What are some of your favorites and works that surprised you most?
There’s a lot! A few recent alums come to mind.
Rosa Sungjoo Park created an installation that involved multichannel audio where the speakers were transducers hidden within wooden sculptures arranged around a room. In order to hear the piece you had to put your ear to the wood, which in turn forced your body to move a certain way in relation to the sculpture. A sort of dance emerged as audience members moved between the pieces.
Will Urmston worked with the Web Audio API to create a piece performed by the audience using their phones. It was a nice way to think about spatial audio, with the number of speakers being variable and the speakers being small and frequency-limited.
Savannah Barkley made several immersive sound works that explored hearing and touch: one featured a sculptural cymatics piece, another explored the capacitive properties of plants. These were stunning, interactive experiences.
Cassidy Batiz made her own 8-channel cube array with 2x4s and car speakers for Sonus, a performance that used ambient light sensors to determine parameters for sound processing.
Michael Moyal created Skate And Play, a skateboard with a sleeve you slide your phone into. The phone ran MobMuPlat (which you program with Pd). With some added sensors, plus GPS and accelerometer data from the phone, he generated a real-time ambient soundtrack for the rider over Bluetooth headphones. It was a really ambitious project and he put it all together in the span of a few weeks.