Sounds Wild and Broken, page 3
Without a diverse vocabulary for hearing, our minds lapse into inattention and our imagination is limited. Hobbled by weak verbs, language must draw on adjectives, adverbs, and analogies. A shrimp claw listens spikily, perhaps, through narrowly tuned hairs. A fish’s low-frequency lateral line hearing is oozy, deep, and fluid. The birds’ aural attention, fueled by high body temperatures, is fevered and has a narrower range of pitch perception than ours, trimmed off at its top by a stumpy, uncoiled cochlea. Is bacterial hearing like pressing a trembling thumb into jelly, viscous and enveloping?
Yet despite the limitations of language and human sense organs, our experiences of the world are encouragements to imagination. Listening opens our minds to other ways of being. At any place on Earth, thousands of parallel sensory worlds coexist, the diverse productions of evolution’s creative hand. We cannot hear with the ears of others, but we can listen and wonder.
At the dock, in my headphones, a whir cuts into the fish and shrimp sounds. It builds in loudness over five seconds then abruptly ends. Cough. Another sputter. An outboard engine has been lowered—the whir was its electric motor easing down the blades—and is now cranking. Two more turns of the starter and the engine comes alive.
The engine’s voice clouds the water, a chug pitched at about the frequency of human speech. The shrimp keep on crackling and their sound joins the outboard in my ears, two textures, one growly, one sparkly, each holding steady. The outboard idles for a minute, then, in an instant, roars. The propeller is spinning, shredding the water. As the boat pulls away, the intensity of the sound wavers, perhaps as the propeller turns toward and away from my hydrophone. Over the next minute, through the hydrophone, I hear the noise climb in frequency, up three octaves from the start, as the engine’s scream fades into the distance. The croaker keeps pulsing its thumping song every ten seconds or so. The silver perch and oyster toadfish fall silent.
Sensory Bargains and Biases
Like a painter applying a delicate brushstroke to a canvas, my audiologist extends her arm and slides a slender foam plug into my right ear. A thin tube runs from the plug to an electronic console and a laptop. A gurgle bursts into my ear. Then the room stills. In the quiet, my senses waken: Winter sun through dusty clinic windows. Odor of floor cleaner and latex. A metal cart clinks far down the hallway.
Suddenly a high-pitched tone darts into the foam-plugged ear. No, I’m wrong, not a single tone but a weird two-note chord. It pulses, repeats, and pulses again, quieter. Then more tones, lower pitched. We’re running down a series. Every time a sound hits my ear, two spikes leap from a trembling horizontal line on a graph on the laptop screen.
Unlike the hearing test I took last month, squeezing a trigger whenever I heard a tone, I now sit empty-handed. This test directly probes the cilia-bearing hair cells of my inner ear, with no conscious involvement on my part. On the screen, I see the graph twitch with every burst of sound. Sometimes the graph kicks up, but I hear nothing.
My audiologist loops the tube and earplug to my left ear. She clicks the machine back on. Another gurgle. Silence. Then come the tones, working their way through the sequence. Now that I’ve figured out how to read the graph, I stare unblinking at the line, waiting. There it is: my ear answering back! Just to the left of the two big spikes is a third, a miniature, that pokes up whenever sound floods my ear. It is ankle high to its tall companions, but jabs up always in synchrony with them. Nearly always. For some sounds, even ones that I can hear, the junior spike is absent or merely flutters.
The small spike on the graph shows me the hair cells of my inner ear in action. When the incoming double tone hits them, they shoot out a pulse of sound in answer. This reply is too quiet for me to hear, but the microphone picks up its signal. My ears, then, are not passive receivers of sound. They are active participants in the process of hearing, making their own vibrations. This ability comes from the cilia-bearing cells in the inner ear, descendants of the oar-like hairs on the membranes of ancient free-living cells, now lodged in watery coils in my head.
As I sit in the sterile, white-walled examination room, thinking of the motions of these tiny hairs, my imagination turns to pond scum. One of my favorite exercises with students is to scoop up some slimy ditch or lake water and peer into the lively throng through a microscope. The unaided eye sees only slime. Glass lenses directed at microscope slides reveal dozens of species in every drop. Some species, especially the emerald cells of the larger algae, creep like cargo ships maneuvering in port. Others, tethered by slender tails to fragments of vegetation, pump globular heads back and forth, wafting bacteria into cuplike maws. Green globules zip past, leaving eddying wakes. Glassy needles glide. Slipper-shaped cells spiral, halt, reverse, then set off again in new directions.
The motion we see under the microscope is all driven by cilia. Some cells have hundreds, a beating pelt, others have just a single one, elongated into what we call a flagellum. The beating of each cilium is powered by ten paired protein columns. Each of these columns is made from a coil of thousands of tiny subunits. Cross-linking proteins connect the columns. Rapid changes in the links among these proteins slide the columns over one another, driving the hairs’ motions. Shuttle proteins run alongside the columns, replenishing and repairing the lively, flexing meshwork. To call this dynamism a “hair” is a convenient shorthand, but belies the inner complexity of the cilium.
Cilia on free-living cells beat at rates from one to one hundred times per second. If we could hear them, the sound would be a hum at and below the lowest pitches that our ears can grasp. But like the shivers of bacteria, these motions disturb only a thin layer of fluid around each cell, too quiet for human ears to detect.
All the descendant lineages of the first eukaryotes possess cilia, although many fungi have lost theirs. We are one of the ciliated descendants. The beating hairs in the pond scum under the microscope seem exotic appendages with little connection to our human bodies. But these unfamiliar motions are a reminder of the hidden activities of our own bodies.
Cilia line the passageways to our lungs, wafting out impurities. Eggs are swept along Fallopian tubes by beating cilia, and sperm cells are powered by waggling flagella. Our brains and spinal columns are washed by fluid circulated by ciliary hairs, and cilia coordinate the embryonic development of our organs. The light receptors in our eyes are modified cilia, the tips of their hairs no longer moving but welcoming light on their protruding arms. News of odors travels to our nerves via cilia that grab aromatic molecules. Our kidneys use cilia to sense, without our conscious awareness, urine flow and to regulate the growth of the kidneys’ network of tubes.
We also hear with cilia. Each of the fifteen thousand sound-sensitive cells in our inner ears is crowned with a cilium bundled with smaller hairs. As a sound wave flows through the inner ear, its motions deflect these bundles. This movement causes the cells to signal to the nervous system. Physical motion is thus alchemized by cilia into bodily sensation.
Outwardly, complex animals seem to have little in common with the cells that swarm through pond scum and ocean water. Yet the vitality of our bodies and the richness of our sensory experience are grounded in the very same cellular structures that power our single-celled relatives. When we perceive sound or light or aroma, we experience deep kinship, a shared cellular heritage.
The cilia in my ears, mounted atop hair cells, are arrayed along a membrane sandwiched between coiled tubes of fluid. These coils, one for each ear, form the cochleas. Each is the size of a fat pea, and they are lodged in the skull just beyond the eardrums. The cochlear membrane is narrow and stiff at the end closest to the eardrum, but wide and floppy at the apex of the coil. High-frequency sounds cause the narrow end to vibrate. Low sounds stimulate the wide part. Every frequency within the range of human hearing thus has a place along the membrane’s gradient of sound sensitivity, as if we had coiled up piano keyboards in our inner ears. Complex patterns of sound, like music or speech, stimulate waves at multiple places along the membrane’s length. Vibrations are picked up by hair cells on the inner part of the membrane, the edge closest to the center of the cochlea’s coil. These signal via the cochlear nerve to the brain.
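The membrane’s gradient of sound sensitivity has a standard empirical description, Greenwood’s place-to-frequency function for the human cochlea. A minimal sketch, assuming the commonly published constants (165.4, 2.1, 0.88); the function name is mine:

```python
# Greenwood's empirical map of the human cochlea: the frequency (Hz)
# sensed at position x along the membrane, where x runs from 0 at the
# wide, floppy apex to 1 at the narrow, stiff base near the eardrum.
# Constants are the commonly published human values (an assumption here).
def greenwood_frequency(x):
    return 165.4 * (10 ** (2.1 * x) - 0.88)

low = greenwood_frequency(0.0)   # apex: the lowest audible pitches, ~20 Hz
high = greenwood_frequency(1.0)  # base: the highest, ~20,000 Hz
```

The exponential form is why the cochlea behaves like a coiled keyboard: equal distances along the membrane correspond to equal musical intervals, not equal numbers of hertz.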
Vigorous sounds have enough energy to buck the cochlear membrane and stimulate inner hair cells. But quieter sounds are too weak. Alone, they cannot trigger nerve impulses. Hair cells on the outer part of the membrane give these softer sound waves a boost so that the inner hair cells can perceive them. Outer hair cells are three times more numerous than those on the inner part of the membrane, underscoring their importance.
When a sound wave of the right frequency hits the outer hair cells, a protein leaps into action, pumping the cells up and down. The protein, prestin, is the fastest-known force generator in living cells. The up and down motion of the outer hair cells amplifies the wave, turning an anemic shiver into a surge. The magnified wave triggers the waiting inner hair cells. The teamwork of outer and inner hair cells allows us to perceive sound across a millionfold difference in energy levels, from a snowflake falling into a drift in the quiet woods to the clap of thunder echoing in a canyon.
What I see on the audiologist’s screen is the activity of my outer hair cells. Normally the cells would pulse with the same frequency as the incoming waves. But the test I’m undergoing throws them into confusion. The two incoming tones are precisely calibrated to hit the membrane very close together and, like two people shaking a rug at slightly different rates, the activated outer hair cells cause the membrane to judder with the weird collision of these two drivers. Part of this judder—a harmless distortion of the waves in my ear—then flows back out of the cochlea. The third spike on the screen is the squeal of my outer hair cells.
At the end of the test, my audiologist clicks at her laptop and the spiking lines disappear, replaced with a graph that shows how my hair cells performed. At low sound frequencies, the cells did fine in both ears. In my right ear, those tuned to higher frequencies have stopped bouncing or have slowed their motion. In my left ear, it is those focused on the midranges that have quieted. These inactive cells are not resting or asleep; they’re defunct. Unlike the hair cells of birds, which can regrow after damage, human inner ear cells get one life only.
The crystal ball, my audiologist calls this test. For someone in their fifties, my results are unexceptional. In future years, more hair cells will bow out, especially in the higher frequencies.
Most of us are born with hale outer hair cells, full of vim all up and down the cochlear membrane. But from then on, it’s all downhill, part of the cellular die-off that marks time in our bodies. We can hasten the decline with loud sounds—guns, power tools, amplified music, engine rooms—and with medications poisonous to hair cells, including common drugs like neomycin and high doses of aspirin. But even a life spent drug-free in quiet surrounds would not protect our ears from the erosive power of passing years.
Such is the cost of living in a body richly endowed with sense organs. Our every sensory experience is mediated by cells. Aging is a cellular process. Over time, cells accumulate defects in their form and DNA, eventually slowing or ceasing their work. And so to experience the passage of time in an animal body is to experience sensory diminishment. This is the deal evolution has bequeathed us: we get to enjoy sensory experience, but in bodies where the scope of perception dwindles as we age. The only animals known to have broken this deal are freshwater-dwelling relatives of jellyfish called Hydra. Their bodies consist of a sac topped by tentacles. Nerves weave through the body in a net, with no brain or complex sense organs. This uncomplicated design, made from a handful of cell types, allows Hydra to regularly purge and replace any defective cells. They live without any signs of aging. But these eternally youthful, inverted jellyfish have only rudimentary senses: a hazy grasp of sound and light delivered by single cells buried in their skins. Our bodies are too complex to self-renew as Hydra does. But we therefore have better-developed senses, mediated by complex organs. We can blame advancing deafness and the other diminishments of age on Faustian forebears. They exchanged ageless bodies for richly sensual lives. This evolutionary bargain was forced on them by one of life’s seemingly unbreakable rules: all complex cells and bodies must age and die.
I mourn the progressive loss of my hearing. The voices and music of people, birds, and trees give me connection, meaning, and joy. But alongside the sadness, I try to accept and enjoy evolution’s bequest. These diverse voices exist only because our bodies are complex and therefore ephemeral.
Our hearing cells and organs not only lock us into a trajectory of aging. They also bias sensory experience. It is not the case that in my youth I had perfect hearing and now I’ve lost some of this transparent connection to the world. Even before my hair cells started dying off, what I heard was highly mediated. Everything that I hear is an imperfect rendering. The inner and outer worlds converse and entangle in my ears.
My mind protests. Sound is sound, surely? Am I not just hearing what surrounds me, connected to the world by open ears? No. This is an illusion. What we perceive is a translation of the world and every translator has special talents, errors, and opinions. Sitting in the clinic, gazing at spikes on a graph, I’m seeing the chatter of my cochlear hair cells. I’m face-to-face with part of the hidden chain of interpretation. Along every step of the path from external sound to internal perception, our body edits and distorts.
The ear trumpets, pinnae, on either side of our heads, along with the ear canal, amplify sound by fifteen to twenty decibels. This boost is the equivalent of walking across a large room to stand next to someone who is talking. Sound waves also bounce around the cups and folds of the pinnae. This clash of waves cancels out some high frequencies. Push your ear flaps forward. You’ll hear a change in brightness. As we move our heads, the sound reflections shift, cutting out slightly different frequencies. From these subtleties, our brain extracts information about where sound is located on the vertical plane. We edit sound even as it enters the ear canal.
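The walking-across-a-room comparison follows from the physics of free-field sound, where pressure falls off in proportion to distance. A small sketch of the arithmetic (the function name and distances are mine, chosen for illustration):

```python
import math

# In the open, sound pressure falls off as 1/distance, so the gain
# from approaching a source is 20 * log10(far / near) decibels.
def approach_gain_db(far_m, near_m):
    return 20 * math.log10(far_m / near_m)

# Walking from 10 m away to stand 1 m from a talker:
gain = approach_gain_db(10, 1)  # 20 dB, the upper end of the boost
                                # that the pinna and ear canal provide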
The middle ear—the eardrum and three ear bones—has the task of converting sound vibrations in air to vibrations in the fluid inside the cochlea. The air-to-water transition faces a physical challenge. When a wave in air hits water, most of the energy bounces back. This is one reason why we can’t hear poolside chatter when we swim underwater. To solve this problem, the tiny bones of the middle ear gather vibrations from the relatively large eardrum and, using the levering action of the longer “hammer” bone pivoting onto the shorter “anvil” and “stirrup” bones, these bones focus vibrations onto a much smaller window leading to the cochlea’s watery tubes. This conversion both amplifies, increasing the pressure of sound waves by about twenty times, and puts a slight filter on the sound, trimming extremely high and low frequencies.
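The roughly twentyfold pressure gain comes from simple geometry: the same force, gathered on the large eardrum, is concentrated onto the small oval window, multiplied by the lever of the ossicles. A back-of-envelope sketch with common textbook figures (the exact areas vary by source; these numbers are assumptions for illustration):

```python
# Rough textbook figures (assumptions; published values vary somewhat):
eardrum_area_mm2 = 55.0      # effective area of the eardrum
oval_window_area_mm2 = 3.2   # the much smaller window into the cochlea
lever_ratio = 1.3            # mechanical advantage of the ossicle lever

# Pressure = force / area, so funneling the same force onto a smaller
# window multiplies pressure by the area ratio, times the lever gain.
pressure_gain = (eardrum_area_mm2 / oval_window_area_mm2) * lever_ratio
# pressure_gain comes out near 22, close to the ~20x figure in the text
```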
Then the cochlea imposes a more severe filter. The upper and lower ends of our hearing are set by the sensitivity of the cochlea. The stiffness of the membrane, the responsiveness of outer hair cells, and the tuning of nerve sensitivities determine not only the upper and lower bounds of our perception of pitch but also our ability to discriminate among sound frequencies. In general, we can discriminate among pitches of one-twentieth of a half step on a piano keyboard. Between the notes B and C, for example, we can potentially, if we concentrate, hear twenty additional microtones. But this is only true for quieter sounds. Our ears hear subtle differences in pitch in whispered or spoken words, but for shouts our discrimination of pitch is coarser. Intense sound bucks the cochlear membrane and overwhelms auditory nerves. We have finer discrimination at low frequencies than at high ones. The shrill sounds of high-pitched insect songs, for example, all sound about the same in pitch to us, even those that, when experienced with the objectivity of a graph of sound frequencies, differ significantly. But for the lower sounds of human speech, we perceive subtle differences among sound frequencies.
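A twentieth of a half step can be made concrete in equal-temperament arithmetic, where a half step is a frequency ratio of 2^(1/12), conventionally split into 100 "cents." A sketch of the microtones between B and C (assuming concert pitch, A4 = 440 Hz):

```python
# A half step in equal temperament is a frequency ratio of 2**(1/12),
# or 100 "cents"; a twentieth of a half step is therefore 5 cents.
cent = 2 ** (1 / 1200)
B4 = 440 * 2 ** (2 / 12)   # B above A4 = 440 Hz (concert-pitch assumption)

# Walking from B4 up to C5 in 5-cent steps passes through the roughly
# twenty microtones a concentrating listener could potentially hear:
steps = [B4 * cent ** (5 * i) for i in range(21)]  # B4 ... C5 inclusive
```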
Nerve signals and the brain’s processing add their own layers of interpretation. Nerves in the cochlea fire when inner hair cells are stimulated. Each of these cells responds to a particular range of sound frequencies corresponding to its place on the high-to-low scale of the cochlear membrane. The width of these ranges and their overlap set another limit on frequency discrimination. The nerve impulses from the cochlea then flow along the auditory nerve, through a series of processing centers in the brain stem, and on to the cerebral cortex. There, the brain interprets incoming signals in the context of expectations, memories, and beliefs. What passes into conscious perception is an interpretation, not a transcript. This is most vividly illustrated by auditory illusions. By playing different sounds into each ear or by looping sounds to create repetition, pioneering acoustic psychologist Diana Deutsch found that she could trick the brain into hearing phantom words and melodies. These illusions reveal that what we “hear” emerges from the brain’s attempts to extract order from incoming signals, even when no such order exists. The words and melodies that we hear are partly a product of our background, each of us hearing words and music relevant to our culture.
Our brains do not just receive input from the ears, they send out signals to the ears, adjusting the cochlea to local conditions. In noisy environments, the brain suppresses the sensitivity of the outer hair cells, like a hand reaching out to crank down the volume on a loudspeaker. This reduces the masking effect of noise, allowing meaningful sounds to be more clearly distinguished. The hair cells in our ears are less jumpy in a noisy restaurant, for example, than they are in a quiet forest.
These layers of interpretation bias our perceptions of loudness. When we walk on pavement, for example, we perceive the sound as about twice as loud as footsteps on soft grass. This accords with the increase in sound intensity, the amount of energy hitting our eardrums. But in a carpentry workshop, our ears mislead us. The circular saw sounds about two or three times as loud as the power drill. But the actual sound intensity, the rate at which energy pounds our ears, is about one hundred times higher. The extent of this biased perception depends, too, on sound frequency. For loud low-frequency sounds—the clap of thunder, for example—muscles tug on middle ear bones, dialing back the intensity of the sound that flows to the cochlea. But for loud high-frequency sounds such as power tools, this protective reflex is weaker.
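The gap between physical energy and felt loudness can be sketched with a standard psychoacoustic rule of thumb, not a measurement from the text: perceived loudness roughly doubles for every ten-decibel increase in a mid-frequency sound. The function names here are mine.

```python
import math

def intensity_to_db(ratio):
    # Physical scale: decibels are 10 * log10 of the intensity ratio.
    return 10 * math.log10(ratio)

def loudness_ratio(db_change):
    # Rule of thumb: perceived loudness roughly doubles per 10 dB
    # (a rough mid-frequency approximation, not an exact law).
    return 2 ** (db_change / 10)

saw_vs_drill_db = intensity_to_db(100)       # 100x the energy is only 20 dB
perceived = loudness_ratio(saw_vs_drill_db)  # ~4x as loud to the ear, the
# same ballpark as the "two or three times" the workshop listener reports
```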