In the Thick of It

A blog on the U.S.-Russia relationship
Graham Allison and Eric Schmidt

Eric Schmidt on Benefits, Challenges of AI

November 02, 2023
Conor Cunningham

During a discussion on the future of AI hosted by Graham Allison, former director of the Belfer Center, at the Harvard Institute of Politics, former Google chairman Eric Schmidt offered his outlook on the future of geopolitics, stating that “future national security issues will be determined by how quickly you can innovate against the solution.” Schmidt argued that “innovative power” will ultimately supplant soft power in geopolitics and candidly described the benefits of AI while admitting that the threats are “quite profound.” Schmidt spoke on six areas of significant relevance for the future of AI:

  1. Societal and economic implications;
  2. The impact of AI on geopolitics and national security;
  3. China's role and potential in AI;
  4. Innovations and technological developments emerging from Russia’s war in Ukraine;
  5. AI’s role in the future of warfare;
  6. The implications of AI for psychological/influence operations.

Key points

  1. While providing incredible benefits, such as doubling productivity, AI will also give rise to what Schmidt called “extreme risks,” the kind that could lead to the deaths of more than 10,000 people. Examples include the risk of large-scale cyber and biological attacks. These risks increase if no guardrails are placed on AI models and if financial incentives are “not aligned with human values.” To avoid extreme risks, governments globally will have to spearhead AI regulation. However, Schmidt admitted that foreseeing and mitigating every potential misuse of AI, particularly misuse of open-source models accessible to everyone, is challenging.
  2. “The future of national security is a very large number of distributed systems,” Schmidt said. Thus, the United States should replace its current surveillance architecture—one dominated by “a very small number of extremely exquisite surveillance systems”—with “an awful lot of cheap satellites.” As innovation plays an increasingly decisive role in geopolitics, such a distributed system would also be harder to counter: “It’s an awful lot more defensible because it’s very hard for the opponent in the war game to shoot down all of your surveillance systems over and over again.”
  3. China remains “a couple of years” behind the U.S. because it was “late to the party,” lacks access to the most advanced computing chips, and requires a greater amount of Chinese-language data to train AI systems, Schmidt said. Graham Allison—who has most recently co-authored an article with Henry Kissinger on paths to AI arms control—emphasized an additional roadblock to innovation: “If you are living in a society in which all wisdom and truth is contained in the thoughts of Xi Jinping, you can’t have your model coming to conclusions that are conflicting with it.” However, Schmidt emphasized that China can surpass the United States, and “they’re coming,” as previous Chinese technological successes like TikTok and the capability of Chinese scientists and engineers demonstrate. Allison added that three to four years ago, he and Schmidt were already urging policymakers to consider China a “full-spectrum competitor” in the AI space.
  4. “Ukraine has become the laboratory of the world for drones” in response to the WWI-style warfare that persists, in part, because Russia does not fully employ its navy and air force [both have been successfully targeted by Ukraine]. Schmidt highlighted that the cheapest drone costs $6,000, weighs 15 kilograms and “is enough to take out two tanks that cost $5 million each.” Such prices are inconceivable in America, where “Predator drones cost around $20-30 million per drone.” Post-war Ukraine will boast cutting-edge military technologies, especially thanks to its UAV production.
  5. “You have to move very, very quickly. We don’t have time for a human in the loop.” The use of AI in military operations will soon present ethical dilemmas concerning both its deployment and defenses against it. While current U.S. military regulations mandate human intervention and supervision, it is conceivable that in the future an AI system could autonomously select and strike targets. The fast pace of AI also means that, unlike the four-year nuclear advantage the U.S. held from 1945 to 1949, any advantage will shift rapidly, a dynamic that could ultimately lead to a new arms race, which Allison noted “is what we are seeing in the AI space in the early applications.”
  6. “We have lost the psychological war,” Schmidt admitted. “With a single computer, you can build an entire ecosystem, an entire set of worlds, and everyone’s narrative can all be different, but they can have an underlying manipulation theme—this is all possible today.” Thus, “elections in 2024 are going to be an unmitigated disaster,” first in India and then in the U.S. According to Schmidt, the only solution is for social media companies to come together on recommendations to address disinformation and misinformation, including to “label the users, label the content, hold people responsible.” However, Schmidt is pessimistic: these measures run counter to social media companies’ objective of increasing profits, since spreading and promoting emotionally charged and misleading information increases engagement. In other words, “the systems are being paid to make you upset,” Schmidt noted.

Why it matters: As AI promises unparalleled economic gains, it simultaneously poses “extreme risks” that could redefine global security paradigms. While nations like China aspire to lead in the AI arena, the U.S. faces challenges in maintaining its technological dominance and recalibrating its defenses. Moreover, Russia’s war in Ukraine is a testament to how innovative technology, like AI, plays a decisive role in modern warfare, a trend that will only intensify with time. On the information front, AI could have devastating effects, especially if social media companies continue to rely on business models that profit from spreading extremism. To address the plethora of current and future issues connected with AI, the U.S. will need to play a difficult game by providing a model for AI guardrails without hampering innovation, in this author’s view.

Conor Cunningham is a student associate with Russia Matters and a graduate student at Harvard University.

The opinions expressed herein are solely those of the individuals quoted and the author. Photo is a screenshot from Harvard University's video of the event.