A.I. and nuclear weapons could make for a catastrophic combination.
Illustration: an orange-tinted explosion and mushroom cloud on the screen of a blue-tinted computer monitor. Credit: Illustration by The New York Times; images by CSA Images and Kieran Stone/Getty Images.
By Peter Coy
Rogue artificial intelligence versus humankind is a common theme in science fiction. It could happen, I suppose. But a more imminent threat is human beings versus human beings, with A.I. used as a lethal weapon by both sides. That threat is growing rapidly because there is an international arms race in militarized A.I.
What makes an arms race in artificial intelligence so frightening is that it shrinks the role of human judgment. Chess programs that are instructed to move fast can complete a game against each other in seconds; artificial intelligence systems reading each other’s moves could go from peace to war just as quickly.
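To make that speed concrete, here is a minimal sketch, assuming the open-source python-chess library and a locally installed Stockfish engine (the path "stockfish" is an assumption about the reader's setup), of two chess programs instructed to move fast. With ten milliseconds per move, a full game typically ends in seconds.

```python
# A sketch of two engines playing each other at high speed.
# Assumes python-chess is installed and a Stockfish binary is on PATH.
import chess
import chess.engine

# Both "players" are instances of the same engine.
white = chess.engine.SimpleEngine.popen_uci("stockfish")
black = chess.engine.SimpleEngine.popen_uci("stockfish")

board = chess.Board()
while not board.is_game_over():
    engine = white if board.turn == chess.WHITE else black
    # Give each engine only 10 milliseconds to decide on a move.
    result = engine.play(board, chess.engine.Limit(time=0.01))
    board.push(result.move)

print(board.result())  # e.g. "1-0", "0-1", or "1/2-1/2"
white.quit()
black.quit()
```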
On paper, military and political leaders remain in control. They are “in the loop,” as computer scientists like to say. But how should those looped-in leaders react if an A.I. system announces that an attack by the other side could be moments away and recommends a pre-emptive attack? Dare they ignore the output of the inscrutable black box that they spent hundreds of billions of dollars developing? If they push the button just because the A.I. tells them to, they are in the loop in name only. If they ignore it on a hunch, the consequences could be just as bad.
The intersection of artificial intelligence that can calculate a million times faster than people and nuclear weapons that are a million times more powerful than any conventional weapon is about as scary as intersections come.
Henry Kissinger, who turns 100 years old on May 27, was born when warfare still involved horses. Now Kissinger, the secretary of state under Presidents Nixon and Ford, is contemplating A.I.-enabled warfare. I recently read “The Age of A.I. and Our Human Future,” the 2021 book he wrote with Eric Schmidt, a former chief executive and chairman of Google, and Daniel Huttenlocher, the inaugural dean of the M.I.T. Schwarzman College of Computing. It was rereleased last year with an afterword that noted some of the recent advances in A.I.
“The A.I. era risks complicating the riddles of modern strategy further beyond human intention — or perhaps complete human comprehension,” the three authors wrote.
The obvious solution is a moratorium on the development of militarized A.I. The Campaign to Stop Killer Robots, an international coalition, argues: “Life and death decisions should not be delegated to a machine. It’s time for new international law to regulate these technologies.”
But the chance of a moratorium is slim. Gregory Allen, a former director of strategy and policy at the Pentagon’s Joint Artificial Intelligence Center, told Bloomberg that efforts by American officials to reach out to their Chinese counterparts were unsuccessful.
The Americans are not going to pause development on militarized A.I. on their own. “If we stop, guess who is not going to stop: potential adversaries overseas,” the Pentagon’s chief information officer, John Sherman, said at a cybersecurity conference this month. “We’ve got to keep moving.”
Schmidt is pressing for development of American capabilities in militarized A.I. through the Special Competitive Studies Project, a foundation that’s part of the Eric & Wendy Schmidt Fund for Strategic Innovation. A report this month reiterates the project’s call for “military-technological superiority over all potential adversaries, including the People’s Liberation Army” of China.
On the crucial topic of keeping people in the loop, Schmidt’s project favors “human-machine collaboration” and “human-machine combat teaming.” The former is for decision-making and the latter is for “executing complex tasks, including in combat operations.” Working together, the report says, humans and machines can accomplish more than either could alone.
The Schmidt project doesn’t advocate autonomous weapons. But the fact is, the Pentagon already has some. As David Sanger noted in The Times this month, Patriot missiles can fire without human intervention “when overwhelmed with incoming targets faster than a human could react.” Even at that stage, the Patriots are supposed to be supervised by human beings. Realistically, though, if a computer can’t keep up in the fog of war, what chance does a person have?
Georges Clemenceau, who was France’s prime minister toward the end of World War I, said that war is too important to be left to military men. He meant that civilian leaders should make the final decisions. But the arms race in artificial intelligence could one day bring us to the point where civilian leaders will see no choice but to cede the final decisions to computers. Then war will be considered too important to be left to human beings.