Sounds Wild and Broken, page 24
These transportations are achieved by playing back into the room what’s happening on stage, with subtle alterations to the sound: adding and changing the duration of reverberation, brightening or darkening the tone, and shifting the spatial origin of sound. The system works like the reflectors, baffles, and curtains of concert halls, but the reflection has passed through microphones and speakers, not bounced from wood, stone, or cloth.
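The simplest building block of this kind of electronic reverberation is a feedback comb filter: the microphone signal is delayed, attenuated, and mixed back into itself, so each sound trails a train of fading echoes. The sketch below is illustrative only — the function name and parameters are hypothetical, and real systems like the one described here are vastly more sophisticated — but it shows the core idea in a few lines.

```python
# A minimal sketch of artificial reverberation: a feedback comb filter.
# All names and parameter values here are illustrative, not taken from
# any real concert-hall system.

def comb_reverb(dry, delay_samples, feedback):
    """Mix delayed, attenuated copies of a signal back into itself.

    dry: list of audio samples; delay_samples: spacing between echoes;
    feedback: 0..1, how much of each echo survives (values near 1
    give long, cathedral-like tails).
    """
    wet = list(dry)
    for i in range(delay_samples, len(wet)):
        wet[i] += feedback * wet[i - delay_samples]
    return wet

# A single impulse (a clap) in an otherwise silent room:
clap = [1.0] + [0.0] * 9
tail = comb_reverb(clap, delay_samples=3, feedback=0.5)
# Echoes appear at samples 3, 6, and 9, each half as loud as the last.
```

Changing `delay_samples` and `feedback` on the fly is, in miniature, what the tap of a preset does: the same room, a different decay.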
The idea of electronically shaping a venue’s sound is at least seventy years old. In 1951, the reverberation and bass response of the newly built Royal Festival Hall in London were too weak. The music felt anemic, clear but lacking rich tones. Rather than gut the interior to remedy excessive sound absorption, the hall was equipped with microphones and speakers, allowing engineers to boost reverberation and low frequencies without giving an obvious sense of amplification. This “assisted resonance” system was remedial and not intended as a tool for elaborate sound design. In the late twentieth century, similar sound reinforcement systems were installed in concert halls worldwide, complementing the acoustics of rooms and doubling as amplification systems for speech or plugged-in instruments. Now, better microphones and speakers, combined with software that allows us to model and manipulate sound, make the system at National Sawdust a creative instrument in its own right.
Is such electronic shaping of sound a defiling artifice for “acoustic” instruments like cellos or flutes? Are we sullying the purity of the musical experience by adding a touch of electrical power to the sounds in a room? The New York Times music critic Anthony Tommasini writes that “natural sound has always been the glory of classical music.” He was “dismayed” by the 1999 addition of an electronic control system to the New York State Theater, then home to both the New York City Opera and New York City Ballet, writing that “a line has been crossed, and I fear the worst.” Conductor Marin Alsop, commenting in 1991 on an early version of the electronically enhanced concert space in the Silva Concert Hall in Eugene, Oregon, said that “to rely on a sound technician for your balance is completely antithetical to the role of a conductor.”
Yet all music is a product of its context. The sound of the human voice or violin that we hear in a recital hall is not an unmediated experience of vocal folds or bow on strings. Rather, the sound is partly constructed by centuries of analysis and experimentation by “technicians” with the acoustics of interior spaces. If we’re listening in a large modern concert hall, our experience is the product of hundreds of thousands of dollars of architectural artifice to bring us the sound we hear. The New York Philharmonic, for example, plays in a hall at Lincoln Center that was built in 1962, then renovated to improve acoustics half a dozen times over the next twenty-five years. A major redesign of the hall is now underway that will, in part, once again overhaul its acoustics, at a cost of more than half a billion dollars. “Natural sound” in these spaces is an expensive contrivance.
The Meyer system, and those of other companies with similar products, builds on a long-standing tradition of engineering the relationship between music and acoustic space. To be fair to skeptical late twentieth-century commentators such as Tommasini and Alsop, early versions were crude compared with what can be achieved today. In 2015, Alex Ross, the New Yorker’s music critic, wrote admiringly of the possibilities of these electronic systems and concluded that “although no amount of digital magic can match the golden thunder of a great hall vibrating in sympathy with Beethoven’s or Mahler’s orchestra, the Meyers may have come closer than anyone in audio history to an approximation of the real thing.” Whether or not electronically enhanced sound is any more “real” than other sound in concert halls, these new systems upend how the relationship between music and space can evolve, adding rapidly adaptable electronics to the protracted architectural work of changing the physical form of buildings. Meyer has now installed its system in concert halls from Vienna to Shanghai to San Francisco, mostly for subtle adjustment of reverberation. The grumblings of the 1990s have quieted down as active electronic enhancement has been accepted as another form of architectural modification in concert halls.
The most obvious and immediate benefit of these electronic systems is to vastly increase the versatility of the space, thus serving many needs in a community and increasing the financial stability of a venue. The “natural sound” of specialized opera houses or other single-use halls is a luxury enjoyed only where the wealthy congregate, mostly in large cities. Electronic adjustments to the acoustics of performance halls potentially bring sonic art to a wider audience, allowing spaces formerly limited by poor, inflexible acoustics to become diverse hubs in local cultural networks.
In a single week, National Sawdust hosts opera singers, jazz, a movie and lecture, a classical ensemble, solo piano, and electronic rock. Each has its own acoustic requirements, some of which are incompatible in a single space. For opera, we need a balance of reverberation and clarity. Classical ensembles require a little more liveliness from the walls. Medieval church music was written for long, cave-like reverberations. For cinema, absolute deadness is ideal, letting the soundtrack flow into the room with minimal sonic reflection. Rock music needs amplification, only slight reverberation from the room, and no odd frequency spikes or feedback as sound bounces back from the room to the microphones on stage. A lecture benefits from a hint of reverb to enrich the voice but not so much as to blur intelligibility. Electronic adjustment allows one space to meet all these needs. Other parts of the sensory experience of music venues—the grand vistas offered by an opera house, the aromas of old stone and incense in a cathedral, the pleasant tension in your legs as you climb the tiers of an amphitheater, the stickiness of spilled beer underfoot in a club—cannot of course be molded by microphones and speakers. But carefully designed electronics can open and diversify the sonic qualities of space.
A few months after the opening concert, I visit National Sawdust during the day to better understand how its new sound system fits with the organization’s mission. I sit at a small table in the center of the empty performance space with Paola Prestini, cofounder and artistic director; Garth MacAleavey, technical director and chief audio engineer; and Holly Hunter, director of projects and artist residencies.
As we talk, Garth touches the screen of a small electronic tablet. Tap. We’re talking in a recital hall, our words clear and rich. Tap. A cathedral with soaring resonance. Tap. A reverberation that goes on for five or more seconds, like standing inside a vast empty oil tanker. Tap. Dead. The warmth of our voices shrinks. We’re suddenly pushing harder to be heard. System reverb is off. Curtains hidden behind the paneling that forms the shell of the room absorb sound waves and eat our voices. Tap. A lecture hall: suddenly our words are clear and lively. We laugh nervously. The sudden flip is disconcerting. We feel completely natural, yet a click of a button transforms how it feels to hear one another and speak. A lesson: our voice comes from the larynx, but its sound and feel are born in relationship to the surroundings. Tap. A brook runs down one side of the room and four singing birds perch across the ceiling above us. Tap and slide. The brook moves to the center. Tap. We’re back in dead space. More astonished laughs.
For millennia, music has evolved with space. This close relationship is now mostly hidden because we hear music in spaces engineered for a good match. Opera in the opera house. Film score in the cinema. Rock in a club or through earbuds. Gregorian chant in the stone-walled church. Switch any of these pairings and the music is garbled, muddied, or deadened.
These close relationships reveal some of the reciprocity between space and innovation in the history of music. Instruments discovered in later Paleolithic caves—flutes, rasps, bull-roarers—are well suited to gatherings of a few dozen people. Louder instruments appeared when human societies grew and sound needed to travel farther. Drums and horns called people to war, the hunt, and religious gatherings. The first documented drums are from the millet- and rice-farming Dawenkou culture of eastern China, from about 4000 BCE. The first known trumpets are from the powerful eighteenth dynasty in Egypt in about 1500 BCE. When societies became large and hierarchical enough for political and religious rulers to build large spaces, the coordinated playing of many instruments filled these buildings with sound. In the third millennium BCE, harps and lyres appeared in royal tombs in Mesopotamia. The royal tombs of ancient Egypt were often stocked with instruments numerous enough to create ensembles. Wall paintings from these tombs and from temples show groups of dozens of musicians playing wind and stringed instruments. The grave of Marquis Yi of Zeng, from the fifth century BCE in China, contained an especially grand instrument, a three-tiered, chromatic-scale set of sixty-five large ornate bronze bells, sonic markers of prodigious wealth. The great philosopher of that age, Mozi, complained of the drain imposed on society’s time and resources by the “great bells and rolling drums, zithers and pipes” of the ruling classes. The first pipe organs were invented by Greek engineers in the third century BCE and soon spread to the homes of the wealthy and the public performance spaces of Greece, Rome, and Alexandria.
Humanity’s creative exploration of sound through instruments was inspired by the tones and timbres of new materials and technologies—ceramics, strings, brass, bellows, valves—and each culture used its most sophisticated craft to build new instruments, just as Paleolithic ivory carvers had done. Increasing potential for loud sounds was one consequence of these technologies.
The present-day diversity of musical instruments reflects the importance of acoustic space in guiding culture and technology. This is most clearly seen when spaces change, opening new possibilities and needs for instruments. In Europe, the advent of large public concert halls in the nineteenth century demanded louder sounds than the small recital halls of the aristocracy. Instruments evolved in response. Compared with the first pianos of the eighteenth century, modern pianos are thunderous. The vigor of their sound increased as the sizes of concert halls increased and new discoveries in metallurgy allowed for stronger wires. The tension in the wires of a modern piano is ten times that of early instruments, an increase made possible by the nineteenth-century addition of solid metal internal piano frames. Tighter-wound metal wires also made violins louder, starting in the late seventeenth century. By the nineteenth century, the tension in violin strings was such that the bass bar, bridge, and fingerboard of older instruments had to be adjusted. The violin bow, too, was refashioned, making it longer and giving it a concave arch, the better to tighten horsehair and give players control. The concert flute was extensively modified in the nineteenth century, mostly through the work of one man, Theobald Boehm. He engineered larger tone holes, better keys, and a reshaped head and embouchure. Although Richard Wagner complained that the vigor of the new flutes made them “blunderbusses,” Boehm’s work established the flute’s place in the modern orchestra. Improvements in valves and keys also loudened and stabilized the sound of other woodwind and brass instruments. The great size of symphony halls became embodied in the forms of the instruments on stage. Orchestras expanded too, from Baroque orchestras of a few dozen to the more than one hundred players put on stage by Wagner and Mahler in the late nineteenth century.
Electric amplification also changes the relationship between instruments and space. The guitar, formerly an instrument suited to parlors, campfires, and other small gatherings, can now, with a mere brush of the hand, fill a stadium with sound. The guitar moved from a rarity in large public venues to near ubiquity in Western popular music. The nature of human song, too, was changed by electric amplification. Now a whisper or throaty croon into the microphone suffices, no projection or push from the diaphragm needed, a radical break from millennia of performance that required unaided lungs to fill places of worship, palaces, and concert halls. Just as the modern piano’s sound was partly born from the vastness of symphony halls, the breathy notes and throaty growls of contemporary popular music have as their parents the furnaces of electric power stations.
We create sonic space every time we press “play” on our smartphones and CD players at home. Because we have an abundant choice of music, albums and tracks are set into competition with one another for our attention. The loudest ones usually win, even if we think we have no preference for loudness. Our brains consistently judge louder music as “better.” More, our brains also prefer music that has had its quiet passages cranked louder. This psychological quirk sparked the “loudness wars,” starting with CDs in the 1990s and continuing to the present day. Producers increase the amplitude of every part of the music, turning the variable loudness of a piece of music into what they call a brick wall, a final product in which every part of the track is boosted to the highest level possible. The resulting sound file on a computer screen shows a tall and unvarying wall of intensity instead of the ups and downs of the volume of most live music. The overall impression is of louder, more present music. But the process eliminates the pop of percussive effects like snare drums, creates a sense of boxed-in tightness, and, in extreme cases, fuzzes the music with white noise.
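In signal-processing terms, brick-walling is dynamic range compression taken to its limit: boost the gain of the whole track, then flatten every peak that exceeds the digital ceiling. The sketch below is a minimal illustration of that idea — the function name and the sample values are made up for the example, not drawn from any real mastering tool.

```python
# A minimal sketch of "brick-walling": amplify every sample, then
# hard-clip anything that exceeds the ceiling. Names and numbers are
# illustrative only.

def brick_wall(samples, gain, ceiling=1.0):
    """Boost every sample by `gain`, then flatten peaks to `ceiling`."""
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

# A quiet verse followed by a loud chorus:
track = [0.1, 0.2, 0.1, 0.8, 0.9, 0.8]
mastered = brick_wall(track, gain=4.0)
# The verse comes back louder, but every chorus peak is now pinned at
# the ceiling: the ups and downs that distinguished them are gone.
```

Plotted on a screen, `mastered` is the "tall and unvarying wall of intensity" the text describes: the quiet passages are raised and the loud ones are all clipped to the same height.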
Producers often disdain the process of “brick-walling” their albums but are pressured by musicians and marketers to push loudness upward. Two infamous examples are the albums Californication by the rock band the Red Hot Chili Peppers and Death Magnetic by the heavy metal band Metallica. Both were subject to petitions from fans demanding remastering to undo extreme brick-walling. Digital streaming services—another new sonic space—are now relieving some of the pressure. These platforms automatically adjust volumes to avoid jarring changes in loudness between tracks. This removes some of the incentive to push up amplitude on recordings. Many albums now are produced in two ways, one for digital streaming and one for CD. The digital version is often produced “as if for vinyl,” hearkening back to a world where the sounds of recorded music came from the physical motion of industrial diamond on rotating plastic. The cutting equipment for vinyl disks cannot cope with brick-walled sound, and so requires a subtler touch from the producer.
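The loudness normalization that streaming platforms apply can be sketched simply: measure each track's average level, then apply a single gain so that every track plays back at the same target. Real services measure loudness with the LUFS standard (ITU-R BS.1770); the version below substitutes a plain RMS measurement for illustration, and all names and the target value are assumptions for this example.

```python
# A minimal sketch of streaming-style loudness normalization, using RMS
# as a stand-in for the LUFS loudness measure real platforms use.
import math

def rms(samples):
    """Root-mean-square level: a crude proxy for perceived loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize_loudness(samples, target_rms=0.2):
    """Scale a whole track so its average level matches the target."""
    gain = target_rms / rms(samples)
    return [s * gain for s in samples]

brickwalled = [0.9, -0.9, 0.9, -0.9]   # loud, flattened master
gentle = [0.1, -0.1, 0.1, -0.1]        # quiet, dynamic master
# After normalization, both tracks play back at the same average level,
# so pushing the master louder gains nothing on the platform.
```

Because the platform undoes the loudness advantage with one gain change, a brick-walled master keeps its squashed dynamics but loses its edge in volume — which is exactly why the incentive to brick-wall fades.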
Earbuds and lightweight headphones, too, make new forms of sonic space. Like physical space and acoustic instruments, earbuds and portable music systems have coevolved. The evidence is here in my desk drawer. A thin metal headband joining two foam-covered minispeakers connects to a 1980s-era pocket cassette player. White-wired earbuds dangle from the plug of a matchbox-sized MP3 player from 2005. Black over-the-ear headphones tangle their wires with a red-and-black set of plastic earbuds, listening devices intended for the three generations of smartphones that have passed them by. Each system is portable and convenient, encasing me in private experiences of music and voice over the decades. Each one has poor sound quality, delivering the outlines of music but not its subtleties. Low and high frequencies are mostly absent. Ambient noise penetrates the thin foam or plastic and washes out quieter sounds. And so on my flimsy 1980s headphones, music that arrived on cassettes from friends after multiple rounds of copying sounded pretty much as good as the original cassette. Later, with MP3 players and smartphones, there was little noticeable difference through inexpensive earbuds between CD-quality sound and highly compressed digital sound files.
The bootleg culture of cassette tape copying and, later, the early popularity of highly compressed digital audio files, many also pirated, were made possible in part by the low quality of earbuds and small headsets. The devices we poked onto or into our ears created a new sonic space and, as it always has, music changed according to the particular demands and possibilities of the space. Technology mediates this relationship, as it does in the analog world. Now, as noise-canceling headphones and better earbuds improve the “personal” listening space, richer music flows into our ears, aided by cheaper and faster data transmission.
The intimacy of headphones changes the relationship between music and listener too. Singers whisper directly through our earbuds and headphones. Compare the Grammy Song of the Year awarded in 2020 with that from 1970: Billie Eilish’s “Bad Guy” is a conspiratorial murmur. She’s right there, her lips to our ears. Joe South’s “Games People Play” is reverberant, distant. He’s on a stage with his band, the sound seeming to flow into an audience. The snap and shimmer of the instruments behind Eilish’s voice sound great on my coin-sized laptop speakers. The same speakers lop off the depth and blur the inflections of the violins, organ, and drums on South’s track. Music from 2020 sounds great on cheap portable speakers, but recordings from 1970 only sound good on more sophisticated audio equipment. The plastic capsules in our ear canals have changed the form of music.
Electronic sculpting of sound in performance venues, like that at National Sawdust, carries the digital revolution into three-dimensional spaces where people gather together to hear music. The technology weakens the link between the physical form of a space and its sonic qualities, for the first time in the long history of musical evolution.
One effect will be a closer relationship between audience, musicians, and composers. When a performer works in a space mismatched to their music, they’re fighting against the room’s acoustics, as if trying to get their sounds, and thus their feelings and ideas, through a headwind. Tuning a room to the particular needs of music therefore activates connections among artists and audiences.

