Future historians may well mark the second half of March 2023 as the moment when the era of artificial intelligence truly began. In the space of just two weeks, the world witnessed the launch of GPT-4, Bard, Claude, Midjourney V5, Security Copilot, and many other AI tools that have surpassed almost everyone’s expectations. The apparent sophistication of these new AI models has outpaced most experts’ predictions by a decade.
For centuries, breakthrough innovations – from the invention of the printing press and the steam engine to the rise of air travel and the internet – have propelled economic development, expanded access to information, and vastly improved health care and other essential services. But such transformative developments have also had negative implications, and the rapid deployment of AI tools will be no different.
AI can perform tasks that individuals are loath to do. It can also deliver education and health care to millions of people who are neglected under existing frameworks. And it can greatly enhance research and development, potentially ushering in a new golden age of innovation. But it can also supercharge the production and dissemination of fake news; displace human labor on a large scale; and create dangerous, disruptive tools that are potentially inimical to our very existence.
Specifically, many believe that the arrival of artificial general intelligence (AGI) – an AI that can teach itself to perform any cognitive task that humans can do – will pose an existential threat to humanity. A carelessly designed AGI (or one governed by unknown “black box” processes) could carry out its tasks in ways that compromise fundamental elements of our humanity. Thereafter, what it means to be human could come to be mediated by AGI.
Clearly, AI and other emerging technologies call for better governance, especially at the global level. But diplomats and international policymakers have historically treated technology as a “sectoral” matter best left to energy, finance, or defense ministries – a myopic perspective that is reminiscent of how, until recently, climate governance was viewed as the exclusive preserve of scientific and technical experts. Now, with climate debates commanding center stage, climate governance is seen as a superordinate domain that comprises many others, including foreign policy. Accordingly, today’s governance architecture aims to reflect the global nature of the issue, with all its nuances and complexities.
As discussions at the G7’s recent summit in Hiroshima suggest, technological governance will require a similar approach. After all, AI and other emerging technologies will dramatically change the sources, distribution, and projection of power around the world. They will allow for novel offensive and defensive capabilities, and create entirely new domains for collision, contest, and conflict – including in cyberspace and outer space. And they will determine what we consume, inevitably concentrating the returns from economic growth in some regions, industries, and firms, while depriving others of similar opportunities and capabilities.
Importantly, technologies such as AI will have a substantial impact on fundamental rights and freedoms, our relationships, the issues we care about, and even our most dearly held beliefs. With their feedback loops and reliance on our own data, AI models will exacerbate existing biases and strain many countries’ already tenuous social contracts.
That means our response must include numerous international accords. For example, ideally we would forge new agreements (at the level of the United Nations) to limit the use of certain technologies on the battlefield. A treaty banning lethal autonomous weapons outright would be a good start; agreements to regulate cyberspace – especially offensive actions conducted by autonomous bots – will also be necessary.
New trade regulations are also imperative. Unfettered exports of certain technologies can give governments powerful tools to suppress dissent and radically augment their military capabilities. Moreover, we still need to do a much better job of ensuring a level playing field in the digital economy, including through appropriate taxation of such activities.
As G7 leaders already seem to recognize, with the stability of open societies possibly at stake, it is in democratic countries’ interest to develop a common approach to AI regulation. Governments are now acquiring unprecedented abilities to manufacture consent and manipulate opinion. When combined with massive surveillance systems, the analytical power of advanced AI tools can create technological leviathans: all-knowing states and corporations with the power to shape – and, if necessary, repress – citizen behavior within and across borders. It is important not only to support UNESCO’s efforts to create a global framework for AI ethics, but also to push for a global Charter of Digital Rights.
The thematic focus of tech diplomacy implies the need for new strategies of engagement with emerging powers. For example, how Western economies approach their partnerships with the world’s largest democracy, India, could make or break the success of such diplomacy. India’s economy will probably be the world’s third largest (after the United States and China) by 2028. Its growth has been extraordinary, much of it reflecting prowess in information technology and the digital economy. More to the point, India’s views on emerging technologies matter immensely. How it regulates and supports advances in AI will determine how billions of people use it.
Engaging with India is a priority for both the US and the European Union, as evidenced by the recent US-India Initiative on Critical and Emerging Technology (iCET) and the EU-India Trade and Technology Council, which met in Brussels this month. But ensuring that these efforts succeed will require a reasonable accommodation of cultural and economic contexts and interests. Appreciating such nuances will help us achieve a prosperous and secure digital future. The alternative is an AI-generated free-for-all.
Project Syndicate