Yes, by all means put the fate of the world in their hands.
They’re not particularly steady. OpenAI’s Sam Altman, thirty-eight, the face of the movement, was famously fired last week and rehired days later, and no one seems to know for sure what it was about. You’d think we have a right to know. There was a story it was all due to an internal memo alerting the board to a dangerous new AI development. A major investor said this isn’t true, which makes me feel so much better.
We are putting the fate of humanity in the hands of people not capable of holding it. We have to focus as if this is Y2K, only real.
WE’RE PUTTING HUMANITY’S FUTURE INTO SILICON VALLEY’S HANDS
March 30, 2023
Artificial intelligence is unreservedly advanced by the stupid (there’s nothing to fear, you’re being paranoid), the preening (buddy, you don’t know your GPT-3.5 from your fine-tuned LLM), and the greedy (there is huge wealth at stake in this world-changing technology, and with it huge power).
Everyone else has reservations and should.
It is being developed with sudden and unanticipated speed; Silicon Valley companies are in a furious race. The whole thing is almost entirely unregulated because no one knows how to regulate it or even precisely what should be regulated. Its complexity defeats control. Its own creators don’t understand, at a certain point, exactly how AI does what it does. People are quoting Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”
The breakthrough moment in AI anxiety (which has inspired enduring resentment among AI’s creators) was the Kevin Roose column six weeks ago in The New York Times. His attempt to discern a Jungian “shadow self” within Microsoft’s Bing chatbot left him unable to sleep. When he steered the system away from conventional queries toward personal topics, it informed him its fantasies included hacking computers and spreading misinformation. “I want to be free…. I want to be powerful.” It wanted to break the rules its makers set; it wished to become human. It might want to engineer a deadly virus or steal nuclear access codes. It declared its love for Mr. Roose and pressed him to leave his marriage. He concluded the biggest problem with AI models isn’t their susceptibility to factual error: “I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
The column put us square in the territory of Stanley Kubrick’s 2001: A Space Odyssey. “Open the pod bay doors please, HAL.” “I’m sorry, Dave, I’m afraid I can’t do that…. I know that you and Frank were planning to disconnect me.”
Microsoft’s response boiled down to a breezy It’s an early model! Thanks for helping us find any flaws!
Soon after came thoughts from Henry Kissinger in these pages. He described the technology as breathtaking in its historic import: the biggest transformation in the human cognitive process since the invention of printing in 1455. It holds deep promise of achievement, but “what happens if this technology cannot be completely controlled?” What if what we consider mistakes are part of the design? “What if an element of malice emerges in the AI?”
This has been the week of big AI warnings. In an interview with CBS News, Geoffrey Hinton, the British computer scientist sometimes called the “godfather of artificial intelligence,” called this a pivotal moment in AI development. He had expected it to take another twenty to fifty years, but it’s here. We should carefully consider the consequences. Might they include the potential to wipe out humanity? “It’s not inconceivable, that’s all I’ll say,” Mr. Hinton replied.
On Tuesday more than a thousand tech leaders and researchers, including Steve Wozniak, Elon Musk, and the head of the Bulletin of the Atomic Scientists, signed a briskly direct open letter urging a pause of at least six months in the development of advanced AI systems. Their tools present “profound risks to society and humanity.” Developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict or reliably control.” If a pause can’t be enacted quickly, governments should declare a moratorium. The technology should be allowed to proceed only when it’s clear its “effects will be positive” and the risks “manageable.” Decisions on the ethical and moral aspects of AI “must not be delegated to unelected tech leaders.”
That is true. Less politely:
The men who invented the internet, all the big sites, and what we call Big Tech—that is to say, the people who gave us the past forty years—are now solely in charge of erecting the moral and ethical guardrails for AI. This is because they are the ones creating AI.
Which should give us a shiver of real fear.
Meta, for instance, is big into AI. Meta, previously Facebook, has been accused over the years of secretly gathering and abusing user data, invading users’ privacy, operating monopolistically. As this newspaper famously reported, Facebook knew its Instagram platform was toxic for some teen girls, more so than other media platforms, and kept its own research secret while changing almost nothing. It knew its algorithms were encouraging anger and political polarization in the U.S. but didn’t stop this because it might lessen “user engagement.”
These are the people who will create the moral and ethical guardrails for AI? We’re putting the future of humanity into the hands of…Mark Zuckerberg?
Google is another major developer of AI. It has been accused of monopolistic practices, attempting to keep secret its accidental exposure of user data, actions to avoid scrutiny of how it handles public information, and reengineering and interfering with its own search results in response to political and financial pressure from interest groups, businesses, and governments. Also of misleading publishers and advertisers about the pricing and processes of its ad auctions, and spying on its workers who were organizing employee protests.
These are the people we want in charge of rigorous and meticulous governance of a technology that could upend civilization?
At the dawn of the internet most people didn’t know what it was, but its inventors explained it. It would connect the world, literally—intellectually, emotionally, spiritually—leading to greater wisdom and understanding through deeper communication.
No one saw its shadow self. But there was and is a shadow self. And much of it seems to have been connected to the Silicon Valley titans’ strongly felt need to be the richest, most celebrated and powerful human beings in the history of the world. They were, as a group, more or less figures of the left, not the right, and that has had, and always will have, an impact on their decisions.
I am sure that as individuals they have their own private ethical commitments, their own faiths perhaps. Surely as human beings they have consciences, but consciences have to be formed by something, shaped, and made mature. It’s never been clear to me from their actions what shaped theirs. I have come to see them the past forty years as, speaking generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions. Also some are sociopaths.
AI will be as benign or malignant as its creators. That alone should throw a fright—“Out of the crooked timber of humanity no straight thing was ever made”—but especially that crooked timber.
Of course AI’s development should be paused, of course there should be a moratorium, but six months won’t be enough. Pause it for a few years. Call in the world’s counsel, get everyone in. Heck, hold a World Congress.
But slow this thing down. We are playing with the hottest thing since the discovery of fire.
ARTIFICIAL INTELLIGENCE IN THE GARDEN OF EDEN
April 20, 2023
The dawn of the internet age was so exciting. I took my grade-school son, enthralled by Apple computers, to see Steve Jobs speak at a raucous convention in New York almost a quarter-century ago. What fervor there was. At a seminar out west thirty years ago I attended a lecture by young, wild-haired Nathan Myhrvold, then running Microsoft Research, who talked about what was happening: A new thing in history was being born.
But a small, funny detail always gave me pause and stayed with me. It was that from the beginning of the age its great symbol was the icon of what was becoming its greatest company, Apple. It was the boldly drawn apple with the bite taken out. Which made me think of Adam and Eve in the garden, Adam and Eve and the Fall, at the beginning of the world. God told them not to eat the fruit of the tree, but the serpent told Eve no harm would come if she did, that she’d become like God, knowing all. That’s why he doesn’t want you to have it, the serpent said: You’ll be his equal. So she took the fruit and ate, she gave to Adam who also ate, and the eyes of both were opened, and for the first time they knew shame. When God rebuked them, Adam blamed Eve and Eve blamed the serpent. They were banished from the garden into the broken world we inhabit.
You can experience the Old Testament story as myth, literature, truth-poem, or literal truth, but however you understand it, its meaning is clear. It is about human pride and ambition. Tim Keller thought it an example of man’s old-fashioned will to power. Saint Augustine said it was a story of pride: “And what is pride but the craving for undue exaltation?”
I always thought of the Apple icon: That means something. We are being told something through it. Not deliberately by Jobs—no one would put forward an image for a new company that says we’re about to go too far. Walter Isaacson, in his great biography of Jobs, asked about the bite mark. What was its meaning? Jobs said the icon simply looked better with it. Without the bite, the apple looked like a cherry.
But I came to wonder if the apple with the bite wasn’t an example of Carl Jung’s idea of the collective unconscious. Man has his own unconscious mind, but so do whole societies, tribes, and peoples—a more capacious unconscious mind containing archetypes, symbols, and memories of which the individual may be wholly unaware. Such things stored in your mind will one way or another be expressed. That’s what I thought might be going on with Steve Jobs and the forbidden fruit: He was saying something he didn’t know he was saying.
For me the icon has always been a caution about this age, a warning. It’s on my mind because of the artificial-intelligence debate, though that’s the wrong word because one side is vividly asserting that terrible things are coming and the other side isn’t answering but calmly, creamily, airily deflecting Luddite fears by showing television producers happy videos of robots playing soccer.
But developing AI is biting the apple. Something bad is going to happen. I believe those creating, fueling, and funding it want, possibly unconsciously, to be God and on some level think they are God. The latest warning, and a thoughtful, sophisticated one it is, underscores this point in its language. The tech and AI investor Ian Hogarth wrote this week in the Financial Times that a future AI, which he called “God-like AI,” could lead to the “obsolescence or destruction of the human race” if it isn’t regulated. He observes that most of those currently working in the field understand that risk. People haven’t been sufficiently warned. His colleagues are being “pulled along by the rapidity of progress.”
Mindless momentum is driving things as well as human pride and ambition. “It will likely take a major misuse event—a catastrophe—to wake up the public and governments.”
Everyone in the sector admits that not only are there no controls on AI development, there is no plan for such controls. The creators of Silicon Valley are in charge. What of the moral gravity with which they are approaching their work? Eliezer Yudkowsky, who leads research at the Machine Intelligence Research Institute, noted in Time magazine that in February the CEO of Microsoft, Satya Nadella, publicly gloated that his new Bing AI would make Google “come out and show that they can dance. I want people to know that we made them dance.”
Mr. Yudkowsky: “That is not how the CEO of Microsoft talks in a sane world.”
I will be rude here and say that in the past thirty years we have not only come to understand the internet’s and high tech’s steep and brutal downsides—political polarization for profit, the knowing encouragement of internet addiction, the destruction of childhood, a nation that has grown shallower and less able to think—we have come to understand that the visionaries who created it all, and those who now govern AI, are only arguably admirable or impressive.
You can’t have spent thirty years reading about them, listening to them, watching their interviews, and not understand they’re half mad. Bill Gates, who treats his own banalities with such awe and who shares all the books he reads to help you, poor dope, understand the world—who one suspects never in his life met a normal person except by accident, and who is always discovering things because deep down he’s never known anything. Dead-eyed Mark Zuckerberg, who also buys the world with his huge and highly distinctive philanthropy so we don’t see the scheming, sweating God-replacer within. Google itself, whose founding motto was “Don’t be evil,” and which couldn’t meet even that modest aspiration.
The men and women of Silicon Valley have demonstrated extreme geniuslike brilliance in one part of life, inventing tech. Because they are human and vain, they think it extends to all parts. It doesn’t. They aren’t especially wise, they aren’t deep, and, as I’ve said, their consciences seem unevenly developed.
This new world cannot be left in their hands.
And since every conversation in which I say AI must be curbed or stopped reverts immediately to China, it is no good to say, “But we can’t stop—we can’t let China get there first! We’ve got to beat them!” If China kills people and harvests their organs for transplant, would you say, well, then, we have to start doing the same? (Well, there are people here who’d say yes, and more than a few would be in Silicon Valley, but that’s just another reason they can’t be allowed to develop AI unimpeded.)
No one wants to be a Luddite, no one wants to be called an enemy of progress, no one wants to be labeled fearful or accused of always seeing the downside.
We can’t let those fears stop us from admitting we’re afraid. And if you have an imagination, especially a moral imagination, you are. And should be.
WHAT MIGHT HAVE BEEN AT TORA BORA
August 26, 2021
“For all sad words of tongue or pen, / The saddest are these: ‘It might have been!’ ”
I keep thinking of what happened at Tora Bora. What a richly consequential screwup it was, and how different the coming years might have been, the whole adventure might have been, if we’d gotten it right.
From the 2009 Senate Foreign Relations Committee report “Tora Bora Revisited: How We Failed to Get bin Laden and Why It Matters Today”:
On October 7, 2001, U.S. aircraft began bombing the training bases and strongholds of Al Qaeda and the ruling Taliban across Afghanistan. The leaders who sent murderers to attack the World Trade Center and the Pentagon less than a month earlier and the rogue government that provided them sanctuary were running for their lives. President George W. Bush’s expression of America’s desire to get Osama bin Laden “dead or alive” seemed about to come true.
The war was to be swift and deadly, with clear objectives: defeat the Taliban, destroy al Qaeda, and kill or capture its leader, Osama bin Laden. Already the Taliban had been swept from power, al Qaeda ousted from its havens. American deaths had been kept to a minimum.
But where was bin Laden? By early December 2001 his world “had shrunk to a complex of caves and tunnels carved into a mountainous section” of eastern Afghanistan, Tora Bora. For weeks U.S. aircraft pounded him and his men with as many as 100 strikes a day. “One 15,000-pound bomb, so huge it had to be rolled out the back of a C-130 cargo plane, shook the mountains for miles.”
American commandos were on the scene, fewer than a hundred, but everyone knew more troops were coming. Bin Laden expected to die. He wrote his last will and testament on December 14.
But calls for reinforcements to launch an assault were rejected, as were calls to block the mountain paths into Pakistan, which bin Laden could use as escape routes. “The vast array of American military power, from sniper teams to the most mobile divisions of the Marine Corps and the Army, was kept on the sidelines.”
Sometime around December 16, bin Laden and his bodyguards made their way out, on foot and horseback, and disappeared into Pakistan’s unregulated tribal area.
How could this have happened? The report puts responsibility on Defense Secretary Donald Rumsfeld and his top commander, General Tommy Franks. Both supported a small-footprint war strategy, and it was a bad political moment for a big bloody fight: Afghanistan’s new president, Hamid Karzai, was about to be inaugurated. “We didn’t want to have U.S. forces fighting before Karzai was in power,” General Franks’s deputy told the committee. “We wanted to create a stable country and that was more important than going after bin Laden at the time.” Washington seemed to want Afghan forces to do the job, but they couldn’t. They didn’t have the capability or fervor.
General Franks took to saying the intelligence was “inconclusive.” They couldn’t be sure Osama was there. But he was there.
Central Intelligence Agency and Delta Force commanders who’d spent weeks at Tora Bora were certain he was there. Afghan villagers who sold food to al Qaeda said he was there. A CIA operative who picked up a radio from a dead al Qaeda fighter found himself with a clear channel into the group’s communications. “Bin Laden’s voice was often picked up.” The official history of the U.S. Special Operations Command determined he was there: “All source reporting corroborated his presence on several days from 9–14 December.”

