Hope in the Age of AI

Margaret Hu

Artwork: "Freedom Flight" by Kerrie Smith © 2019

On May 30, 2023, the Center for AI Safety (CAIS) released a statement signed by dozens of leading researchers and industry executives, warning that AI posed an unprecedented threat to humanity: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”1

The one-sentence CAIS statement, ambiguous and foreboding, was signed by OpenAI CEO Sam Altman; computer scientist Geoffrey Hinton, a "godfather of AI" who publicly departed Google in May 2023, citing concerns about AI risks; Bruce Schneier, cybersecurity pioneer and Chief of Security Architecture at Inrupt; Kevin Scott, Chief Technology Officer at Microsoft; and other prominent technologists. It immediately drew a flood of criticism. Was it unnecessarily alarmist? Was it hyperbolic and exaggerated?

Technological innovation is moving at a lightning pace, bringing with it threats to democracy and our national security, and, for those who took the CAIS statement seriously, possibly a threat to human life and civilization itself.

How do we maintain hope amidst the threat of AI-driven human extinction? To do so, we must first take an honest and clear-eyed look at the threat and resist the impulse to ignore it or despair. Only by looking clearly at the possibility of devastation can we begin the process of reimagination necessary to sustain us. Hope in the face of an existential threat requires both a courageous alternative vision of the future along with the will to move toward that vision in concrete ways. Through imagining and pursuing technical, legal, institutional, and administrative mitigation, we enact hope.

The Threat

Emerging technologies are being weaponized against U.S. citizens and institutions, compromising critical infrastructure and sowing discord. One obviously concerning example: disinformation campaigns undermining faith in free and fair elections culminated in the Capitol attack of January 6, 2021. Now, disinformation and misinformation can be produced and amplified at an exponential pace using generative AI tools like ChatGPT. On May 22, 2023, a falsified image purporting to show an explosion at the Pentagon circulated on social media. Shortly thereafter, the markets dipped and panic momentarily ensued before officials verified that the image was fake and likely AI-generated.2

In 2019, prior to the COVID-19 global pandemic, Stanford epidemiologist Stephen Luby predicted the possibility of human extinction, or the collapse of civilization, by 2100. Luby identified four ways the collapse could come about: climate change, pandemic, nuclear holocaust, and generative AI.3 How did he think we should move forward? Luby focused his attention on the role of universities, calling for a multidisciplinary approach to long-term problem-solving in order to ensure a thriving human society for generations to come.

Hope in Action

Strengthening our political system requires legal innovation, a point driven home by our Nation’s Founding. The Declaration of Independence and the U.S. Constitution represented radical innovation of legal instruments and were manifestations of translational research by the founders.4 The U.S. Constitution is the result of extensive research, particularly by James Madison, the architect of the Constitution and Bill of Rights.5 It required the translation and integration of a combination of disciplines, such as science, philosophy, history, commerce, and governance. For the founders, the interdisciplinary work of architecting founding legal documents was a manifestation of hope. Their hopeful vision could only become a reality through the work of legal innovation.

Such legal innovations became necessary in part because of the speed of technological innovation. In Federalist No. 11, Alexander Hamilton argued that adoption of the U.S. Constitution was necessary to protect commercial activity and what he described as the "adventurous spirit, which distinguishes the commercial character of America[.]"6 Protection of the pursuit of science and intellectual property is embedded within the text of the U.S. Constitution itself, in Article I, Section 8: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries[.]"7

For generations, the United States led the globe in both legal and technological innovation, and it remains a global leader in emerging technologies. Yet although the U.S. exercises outsized power over technological progress worldwide, for the past decade it has faltered in creating the legal innovations necessary to productively harness these advances. The European Union (EU) has repeatedly stated that legal innovation must proceed in tandem with technological innovation to safeguard both economic growth and consumer protection. With the implementation of the EU's General Data Protection Regulation in 2018, and now the introduction of the proposed AI Act and other forward-looking tech regulations, the EU is outpacing the U.S. in legal innovation.

States such as California, Texas, and Illinois are now attempting to fill the void in tech regulation. Most recently, Montana enacted legislation banning TikTok from being downloaded in the state, effective January 1, 2024. The law has been criticized as unenforceable, yet it reflects what some view as legal innovation by a single state, premised on the claim that such innovation is necessary to protect national security. Montana and other states contend that the U.S. has been downgraded as a democracy as its own technological innovations have been weaponized against it. Threats to U.S. critical infrastructure and cyberattacks are increasing. Disinformation campaigns, exacerbated by generative AI and deepfakes, undermine confidence in democratic governance and U.S. industry.

Generative AI poses perhaps an unprecedented threat to national security, raising three questions: (1) What technological architecture and design are needed? (2) What legal architecture and design are needed? And (3) what is the relationship between the two? These questions have occupied the minds of members of Congress and state legislatures, regulators and policymakers, academics, and industry leaders, especially in light of the transformative potential of ChatGPT.

Upon ChatGPT's introduction, China announced that it posed a grave threat to national security and immediately implemented legal measures to block its adoption. Walmart likewise moved to block ChatGPT, as did Amazon and Microsoft; the companies explained that it posed unprecedented threats to cybersecurity.

One day before Montana Governor Greg Gianforte signed the law banning TikTok, Sam Altman, CEO of OpenAI, the company that created ChatGPT, was called to testify before Congress on the potential benefits and dangers of generative AI. Altman considered the possibility that the government should own and manage ChatGPT, much as it did the NASA space program: safeguarding important national security objectives would have been extraordinarily challenging had space technology been vested solely in the hands of private corporations. Altman's testimony invited a conversation on the laws needed to place the proper guardrails around OpenAI's technologies. Members of Congress posed this question: How do we craft laws that can incentivize socially responsible AI and mitigate the harms of dangerous and discriminatory AI? This is one of the key questions that lawyers and technologists will try to answer, and should try to answer together.

In the face of such existential threats, the hope we need must necessarily be a working hope, in which professionals’ expertise is married to a moral and ethical responsibility that compels them to do their particular work in a way that contributes not only to a good society today, but also lays the foundation for good to prevail in the face of the challenges we see on the horizon. To lay the groundwork for reinforcing democracy and national security, technology innovation requires not only insights from well-trained STEM researchers and technologists, but forward-looking perspectives from researchers and technologists who are engaged in a dialogue with industry, state and federal lawmakers and regulators, academics and researchers, and civil society organizations. Similarly, lawyers and legal scholars must be in dialogue with industry leaders and technologists, cybersecurity researchers, and members of the intelligence community, and the military community.

Hope As a Vocation

Hope as a vocation can be witnessed in the growing interest in Public Interest Technology, also known as PublicTech, a multidisciplinary research effort to interrogate how to incentivize innovation that best serves public outcomes. New America's Public Interest Technology University Network (PIT-UN) is a prominent example. The network comprises over 50 colleges and universities globally, with strategic funding from the Ford Foundation, Hewlett Foundation, MasterCard Impact Fund, The Raikes Foundation, Schmidt Futures, and The Siegel Family Endowment. Bruce Schneier, cybersecurity pioneer and a co-signer of the CAIS statement, explained: "Public-interest technologists are a diverse and interdisciplinary group of people. Their backgrounds are in technology, policy, or law. This is important, you do not need a computer-science degree to be a public-interest technologist."8

One sign of hope is that universities are increasingly elevating PublicTech leadership. Sylvester Johnson, a leader of PIT-UN, was recently appointed as Virginia Tech's Associate Vice Provost for Public Interest Technology. Johnson is the former Assistant Vice Provost for the Humanities and a scholar of the intersection of technology, race, religion, and national security. In his new role, he has been tasked with creating new methods and forms of collaboration for democracy, social justice, and sustainability in guiding research on our shared technological future. To say that the future of humanity may depend on interdisciplinary research and on university leadership that champions democracy and sustainability may sound insufficient in the face of ominous warnings about the possibility of human extinction; and yet true hope is not a lofty ideal but a choice to work together, and work well, in pursuit of the common good. Hope manifests in specific interventions and choices, and becomes larger than the sum of its parts as individuals enact their moral and professional responsibilities together.


Notes

  1. Kelvin Chan and Matt O’Brien, Artificial intelligence could one day cause human extinction, Center for AI Safety warns, USA Today (May 30, 2023), usatoday.com/story/tech/news/2023/05/30/ai-may-cause-human-extinction-experts-warn/70269260007.
  2. Shannon Bond, Fake viral images of an explosion at the Pentagon were probably created by AI, NPR (May 22, 2023), npr.org/2023/05/22/1177590231/fake-viral-images-of-an-explosion-at-the-pentagon-were-probably-created-by-ai.
  3. Jody Berger, What’s likely to cause human extinction—and how can we avoid it?, Stanford Doerr School of Sustainability (Feb. 19, 2019), earth.stanford.edu/news/whats-likely-cause-human-extinction-and-how-can-we-avoid-it.
  4. Jed Purdy, A Tolerable Anarchy: Rebels, Reactionaries, and the Making of American Freedom (Vintage, 2010).
  5. Robert Morgan, James Madison on the Constitution and the Bill of Rights (Westport, CT: Greenwood Press, 1988).
  6. Alexander Hamilton, James Madison, & John Jay, The Federalist Papers: A Collection of Essays, Written in Favour of the New Constitution, As Agreed Upon by the Federal Convention, September 17, 1787 (New York, NY: Fall River Press, reprinted 2021).
  7. Constitution of the United States, Article I, Section 8 (1787).
  8. Bruce Schneier, Public-Interest Technology Resources (Jan/Feb 2020), online database available at public-interest-tech.com, last updated May 30, 2022.

Margaret Hu is a professor of law at William and Mary. Her research focuses on the intersection of immigration policy, national security, cybersurveillance, and civil rights.