Range, page 22
Taylor and Greve expected a typical industrial production learning curve: creators learn by repetition, so those making more comics in a given span of time should make better ones on average. They were wrong. They also guessed, as had been shown in industrial production, that the more resources a publisher had, the better its creators’ average product would be. Wrong. And they made the very intuitive prediction that as creators’ years of experience in the industry increased, they would make better comics on average. Wrong again.
A high-repetition workload negatively impacted performance. Years of experience had no impact at all. If not experience, repetition, or resources, what helped creators make better comics on average and innovate?
The answer (in addition to not being overworked) was how many of twenty-two different genres a creator had worked in, from comedy and crime, to fantasy, adult, nonfiction, and sci-fi. Where length of experience did not differentiate creators, breadth of experience did. Broad genre experience made creators better on average and more likely to innovate.
Individual creators started out with lower innovativeness than teams—they were less likely to produce a smash hit—but as their experience broadened they actually surpassed teams: an individual creator who had worked in four or more genres was more innovative than a team whose members had collective experience across the same number of genres. Taylor and Greve suggested that “individuals are capable of more creative integration of diverse experiences than teams are.”
They titled their study Superman or the Fantastic Four? “When seeking innovation in knowledge-based industries,” they wrote, “it is best to find one ‘super’ individual. If no individual with the necessary combination of diverse knowledge is available, one should form a ‘fantastic’ team.” Diverse experience was impactful when created by platoon in teams, and even more impactful when contained within an individual. That finding immediately reminded me of my own favorite comics creators. Japanese comics and animated-film creator Hayao Miyazaki may be best known for the dreamlike epic Spirited Away, which surpassed Titanic as the highest-grossing film ever in Japan, but his comics and animation career before that left almost no genre untouched. He ranged from pure fantasy and fairy tales to historical fiction, sci-fi, slapstick comedy, illustrated historical essays, action-adventure, and much more. Novelist, screenwriter, and comics author Neil Gaiman has a similarly expansive range, from journalism and essays on art to a fiction oeuvre encompassing both stories that can be read to (or by) the youngest readers as well as psychologically complex examinations of identity that have enthralled mainstream adult audiences. Jordan Peele is not a comics creator, but the writer and first-time director of the extraordinary surprise hit Get Out struck a similar note when he credited comedy writing for his skill at timing information reveals in a horror film. “In product development,” Taylor and Greve concluded, “specialization can be costly.”
In kind environments, where the goal is to re-create prior performance with as little deviation as possible, teams of specialists work superbly. Surgical teams work faster and make fewer mistakes as they repeat specific procedures, and specialized surgeons get better outcomes even independent of repetitions. If you need to have surgery, you want a doctor who specializes in the procedure and has done it many times, preferably with the same team, just as you would want Tiger Woods to step in if your life was on the line for a ten-foot putt. They’ve been there, many times, and now have to re-create a well-understood process that they have executed successfully before. The same goes for airline crews. Teams that have experience working together become exceedingly efficient at delegating all of the well-understood tasks required to ensure a smooth flight. When the National Transportation Safety Board analyzed its database of major flight accidents, it found that 73 percent occurred on a flight crew’s first day working together. Like surgeries and putts, the best flight is one in which everything goes according to routines long understood and optimized by everyone involved, with no surprises.
When the path is unclear—a game of Martian tennis—those same routines no longer suffice. “Some tools work fantastically in certain situations, advancing technology in smaller but important ways, and those tools are well known and well practiced,” Andy Ouderkirk told me. “Those same tools will also pull you away from a breakthrough innovation. In fact, they’ll turn a breakthrough innovation into an incremental one.”
* * *
University of Utah professor Abbie Griffin has made it her work to study modern Thomas Edisons—“serial innovators,” she and two colleagues termed them. Their findings about who these people are should sound familiar by now: “high tolerance for ambiguity”; “systems thinkers”; “additional technical knowledge from peripheral domains”; “repurposing what is already available”; “adept at using analogous domains for finding inputs to the invention process”; “ability to connect disparate pieces of information in new ways”; “synthesizing information from many different sources”; “they appear to flit among ideas”; “broad range of interests”; “they read more (and more broadly) than other technologists and have a wider range of outside interests”; “need to learn significantly across multiple domains”; “Serial innovators also need to communicate with various individuals with technical expertise outside of their own domain.” Get the picture?
Charles Darwin “could be considered a professional outsider,” according to creativity researcher Dean Keith Simonton. Darwin was neither a university faculty member nor a professional scientist at any institution, but he was networked into the scientific community. For a time, he focused narrowly on barnacles, but got so tired of it that he declared, “I am unwilling to spend more time on the subject,” in the introduction to a barnacle monograph. Like the 3M generalists and polymaths, he got bored sticking in one area, so that was that. For his paradigm-shattering work, Darwin’s broad network was crucial. Howard Gruber, a psychologist who studied Darwin’s journals, wrote that Darwin only personally carried out experiments “opportune for experimental attack by a scientific generalist such as he was.” For everything else, he relied on correspondents, Jayshree Seth style. Darwin always juggled multiple projects, what Gruber called his “network of enterprise.” He had at least 231 scientific pen pals who can be grouped roughly into thirteen broad themes based on his interests, from worms to human sexual selection. He peppered them with questions. He cut up their letters to paste pieces of information in his own notebooks, in which “ideas tumble over each other in a seemingly chaotic fashion.” When his chaotic notebooks became too unwieldy, he tore pages out and filed them by themes of inquiry. Just for his own experiments with seeds, he corresponded with geologists, botanists, ornithologists, and conchologists in France, South Africa, the United States, the Azores, Jamaica, and Norway, not to mention a number of amateur naturalists and some gardeners he happened to know. As Gruber wrote, the activities of a creator “may appear, from the outside, as a bewildering miscellany,” but he or she can “map” each activity onto one of the ongoing enterprises.
“In some respects,” Gruber concluded, “Charles Darwin’s greatest works represent interpretative compilations of facts first gathered by others.” He was a lateral-thinking integrator.
Toward the end of their book Serial Innovators, Abbie Griffin and her coauthors depart from stoically sharing their data and observations and offer advice to human resources managers. They are concerned that HR policies at mature companies have such well-defined, specialized slots for employees that potential serial innovators will look like “round pegs to the square holes” and get screened out. Their breadth of interests does not neatly fit a rubric. They are “π-shaped people” who dive in and out of multiple specialties. “Look for wide-ranging interests,” they advised. “Look for multiple hobbies and avocations. . . . When the candidate describes his or her work, does he or she tend to focus on the boundaries and the interfaces with other systems?” One serial innovator described his network of enterprise as “a bunch of bobbers hanging in the water that have little thoughts attached to them.” Hamilton creator Lin-Manuel Miranda painted the same idea elegantly: “I have a lot of apps open in my brain right now.”
Griffin’s research team noticed that serial innovators repeatedly claimed that they themselves would be screened out under their company’s current hiring practices. “A mechanistic approach to hiring, while yielding highly reproducible results, in fact reduces the numbers of high-potential [for innovation] candidates,” they wrote. When I first spoke with him, Andy Ouderkirk was developing a class at the University of Minnesota partly about how to identify potential innovators. “We think a lot of them might be frustrated by school,” he said, “because by nature they’re very broad.”
Facing uncertain environments and wicked problems, breadth of experience is invaluable. Facing kind problems, narrow specialization can be remarkably efficient. The problem is that we often expect the hyperspecialist, because of their expertise in a narrow area, to magically be able to extend their skill to wicked problems. The results can be disastrous.
CHAPTER 10
Fooled by Expertise
THE BET WAS ON, and it was over the fate of humanity.
On one side was Stanford biologist Paul Ehrlich. In congressional testimony, on The Tonight Show (twenty times), and in his 1968 bestseller The Population Bomb, Ehrlich insisted that it was too late to prevent a doomsday apocalypse from overpopulation. On its lower left corner, the book cover bore an image of a fuse burning low, and a reminder that the “bomb keeps ticking.” Resource shortages would cause hundreds of millions of starvation deaths within a decade, Ehrlich warned. The New Republic alerted the world that the global population had already outstripped the food supply. “The famine has started,” it proclaimed. It was cold, hard math: human population was growing exponentially; the food supply was not. Ehrlich was a butterfly specialist, and an accomplished one. He knew full well that nature did not regulate animal populations delicately. Populations exploded, blew past the available resources, and crashed. “The shape of the population growth curve is one familiar to the biologist,” he wrote.
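The “cold, hard math” here is the classic Malthusian comparison: a population compounding at a fixed rate must eventually overtake a food supply that grows by only a fixed amount per year. A minimal sketch of that arithmetic, with made-up illustrative numbers rather than anything from The Population Bomb:

```python
# Malthusian comparison: compounding population vs. linearly growing food.
# All numbers are illustrative placeholders, not Ehrlich's figures.
population = 100.0       # arbitrary units
food = 150.0             # arbitrary units; starts with a surplus
pop_growth_rate = 0.02   # 2 percent compounding per year
food_growth = 2.0        # fixed additional units per year

year = 0
while population <= food:
    population *= 1 + pop_growth_rate
    food += food_growth
    year += 1

print(f"Population overtakes food supply in year {year}")
```

However generous the starting surplus, the exponential curve always crosses the straight line eventually; what Simon disputed, in effect, was whether the food line stays straight.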
Ehrlich played out hypothetical scenarios in his book, representing “the kinds of disasters that will occur.” In one scenario, during the 1970s the United States and China start blaming one another for mass starvation and end up in a nuclear war. That’s the moderate scenario. In the bad one, famine rages across the planet. Cities alternate between riots and martial law. The American president’s environmental advisers recommend a one-child policy and sterilization of people with low IQ scores. Russia, China, and the United States are dragged into nuclear war, which renders the northern two-thirds of Earth uninhabitable. Pockets of society persist in the Southern Hemisphere, but the environmental degradation soon extinguishes the human race. In the “cheerful” scenario, population controls begin. The pope announces that Catholics should reproduce less, and gives his blessing to abortion. Famine spreads, and countries teeter. By the mid-1980s, the major death wave ends and agricultural land can begin to be rehabilitated. The cheerful scenario forecast only half a billion or so deaths by starvation. “I challenge you to create one more optimistic,” Ehrlich wrote, adding that he would not count scenarios involving benevolent aliens with care packages.
Economist Julian Simon took up Ehrlich’s challenge to create a more optimistic picture. The late 1960s was the prime of the “green revolution.” Technology from other sectors—water control techniques, hybridized seeds, management strategies—moved into agriculture, and global crop yields were increasing. Simon saw that innovation was altering the equation. More people would actually be the solution, because it meant more good ideas and more technological breakthroughs. So Simon proposed a bet. Ehrlich could choose five metals that he expected to become more expensive as resources were depleted and chaos ensued over the next decade. The material stakes were $1,000 worth of Ehrlich’s five metals. If, ten years hence, prices had gone down, Ehrlich would have to pay the price difference to Simon. If prices went up, Simon would be on the hook for the difference. Ehrlich’s liability was capped at $1,000, whereas Simon’s risk had no roof. The bet was made official in 1980.
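The bet’s settlement rule can be sketched as a short function: $1,000 is split evenly across the chosen metals, and after ten years whoever the price movement favors collects the difference. The metal names and prices below are invented placeholders, not the actual 1980 and 1990 quotes:

```python
def settle_bet(prices_start, prices_end, stake=1000.0):
    """Settle the Ehrlich-Simon wager as described: the stake is split
    evenly across the metals; if the basket's value fell, Ehrlich pays
    Simon the difference (capped at the stake, since prices cannot fall
    below zero); if it rose, Simon pays Ehrlich, with no cap."""
    per_metal = stake / len(prices_start)
    end_value = sum(per_metal * (prices_end[m] / prices_start[m])
                    for m in prices_start)
    diff = end_value - stake
    if diff < 0:
        return ("Ehrlich pays Simon", min(-diff, stake))
    return ("Simon pays Ehrlich", diff)

# Hypothetical prices (dollars per unit) for illustration only:
p1980 = {"copper": 1.00, "tin": 8.00, "nickel": 3.00}
p1990 = {"copper": 0.80, "tin": 5.00, "nickel": 2.50}
print(settle_bet(p1980, p1990))
```

The asymmetry in the rule is why Ehrlich’s downside was bounded at $1,000 while Simon’s was not: a basket of prices can at worst go to zero, but it can rise without limit.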
In October 1990, Simon found a check for $576.07 in his mailbox. Ehrlich got smoked. The price of every one of the metals declined. Technological change not only supported a growing population, but the food supply per person increased year after year, on every continent. The proportion of people who are undernourished is too high until it is zero, but it has never been so low as it is now. In the 1960s, 50 of every 100,000 global citizens died annually from famine; now that number is 0.5. Even without the pope’s assistance, the world’s population growth rate began a precipitous decline that continues today. When child mortality declined and education (especially for women) and development increased, birth rates decreased. Humanity will need more innovation as absolute world population continues to grow, but the growth rate is declining, rapidly. The United Nations projects that by the end of the century human population will be near a peak—the growth rate approaching zero—or it could even be in decline.
Ehrlich’s starvation predictions were almost magically bad. He made them just as technological development was dramatically altering the global predicament, and right before the rate of population growth started a long deceleration. And yet, the very same year he conceded the bet, Ehrlich doubled down in another book. Sure, the timeline had been a little off, but “now the population bomb has detonated.” Despite one erroneous prediction after another, Ehrlich amassed an enormous following and continued to receive prestigious awards. Simon became a standard-bearer for scholars who felt that Ehrlich had ignored economic principles, and for anyone angry at an incessant flow of dire predictions that did not manifest. The kind of excessive regulations Ehrlich advocated, the Simon camp argued, would quell the very innovation that had delivered humanity from catastrophe. Both men became luminaries in their respective domains. And both were mistaken.
When economists later examined metal prices for every ten-year window from 1900 to 2008, during which time world population quadrupled, they saw that Ehrlich would have won the bet 62 percent of the time. The catch: commodity prices are a bad proxy for population effects, particularly over a single decade. The variable that both men were certain would vindicate their worldviews actually had little to do with them. Commodity prices waxed and waned with macroeconomic cycles, and a recession during the bet brought the prices down. Ehrlich and Simon might as well have flipped a coin and both declared victory.
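The economists’ exercise amounts to a rolling comparison: slide a ten-year window along a century of prices and count how often the window ends higher than it began. A sketch of that computation over a synthetic price series (the real analysis used inflation-adjusted metal prices from 1900 to 2008):

```python
import random

random.seed(0)
# Synthetic annual "price" series standing in for a real commodity index.
prices = [100.0]
for _ in range(108):  # covering a span like 1900..2008
    prices.append(prices[-1] * random.uniform(0.93, 1.08))

# Ehrlich "wins" a window if the price ended higher than it started.
windows = range(len(prices) - 10)
ehrlich_wins = sum(prices[start + 10] > prices[start] for start in windows)
share = ehrlich_wins / len(windows)
print(f"Ehrlich wins {share:.0%} of ten-year windows")
```

Run over a noisy series like this, the winner of any single window is dominated by where the macroeconomic cycle happened to fall, which is the economists’ point: the 1980s window was one draw from a distribution, not a verdict on either worldview.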
Both men dug in. Each declared his faith in science and the undisputed primacy of facts. And each continued to miss the value of the other’s ideas. Ehrlich was wrong about population (and the apocalypse), but right on aspects of environmental degradation. Simon was right about the influence of human ingenuity on the food and energy supply, but wrong in claiming that improvements in air and water quality also vindicated his predictions. Ironically, those improvements failed to arise naturally from technological initiative and markets, and rather were bolstered through regulations pressed by Ehrlich and others.
Ideally, intellectual sparring partners “hone each other’s arguments so that they are sharper and better,” Yale historian Paul Sabin wrote. “The opposite happened with Paul Ehrlich and Julian Simon.” As each man amassed more information for his own view, each became more dogmatic, and the inadequacies in their models of the world more stark.
There is a particular kind of thinker, one who becomes more entrenched in their single big idea about how the world works even in the face of contrary facts, whose predictions become worse, not better, as they amass information for their mental representation of the world. They are on television and in the news every day, making worse and worse predictions while claiming victory, and they have been rigorously studied.
* * *
It started at the 1984 meeting of the National Research Council’s committee on American-Soviet relations. Newly tenured psychologist and political scientist Philip Tetlock was thirty years old, by far the most junior committee member. He listened intently as members discussed Soviet intentions and American policies. Renowned experts confidently delivered authoritative predictions, and Tetlock was struck by the fact that they were often perfectly contradictory to one another, and impervious to counterarguments.
Tetlock decided to put expert predictions to the test. With the Cold War in full swing, he began a study to collect short- and long-term forecasts from 284 highly educated experts (most had doctorates) who averaged more than twelve years of experience in their specialties. The questions covered international politics and economics, and in order to make sure the predictions were concrete, the experts had to give specific probabilities of future events. Tetlock had to collect enough predictions over enough time that he could separate lucky and unlucky streaks from true skill. The project lasted twenty years, and comprised 82,361 probability estimates about the future. The results limned a very wicked world.
The average expert was a horrific forecaster. Their areas of specialty, years of experience, academic degrees, and even (for some) access to classified information made no difference. They were bad at short-term forecasting, bad at long-term forecasting, and bad at forecasting in every domain. When experts declared that some future event was impossible or nearly impossible, it nonetheless occurred 15 percent of the time. When they declared a sure thing, it failed to transpire more than one-quarter of the time. The Danish proverb that warns “It is difficult to make predictions, especially about the future,” was right. Dilettantes who were pitted against the experts were no more clairvoyant, but at least they were less likely to call future events either impossible or sure things, leaving them with fewer laugh-out-loud errors to atone for—if, that is, the experts had believed in atonement.
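Tetlock’s headline numbers are statements about calibration: among all the events a forecaster assigned probability p, roughly a fraction p should actually occur. A minimal calibration check over a toy set of (stated probability, outcome) pairs — the data here are invented for illustration, not Tetlock’s:

```python
from collections import defaultdict

# (stated probability, did the event occur?) -- invented toy data
forecasts = [
    (0.0, True), (0.0, False), (0.0, False), (0.0, False), (0.0, False),
    (1.0, True), (1.0, True), (1.0, False), (1.0, True),
    (0.5, True), (0.5, False),
]

# Group forecasts by stated probability, then compare each group's
# stated probability to the observed frequency of the event.
buckets = defaultdict(list)
for p, occurred in forecasts:
    buckets[p].append(occurred)

for p in sorted(buckets):
    outcomes = buckets[p]
    rate = sum(outcomes) / len(outcomes)
    print(f"stated p={p:.1f}: occurred {rate:.0%} "
          f"of the time ({len(outcomes)} forecasts)")
```

In this toy data, events labeled impossible (p = 0.0) happen 20 percent of the time and “sure things” (p = 1.0) fail a quarter of the time; applied to real forecasts, the same grouping is what exposes the gaps Tetlock measured.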

