
Managing the Sino-American AI Race

The official Sino-American dialogue on AI governance will continue to face serious political and institutional constraints that will limit what is possible. But much more could be achieved through unofficial channels that connect experts from across both societies.

NEW HAVEN – Central to the Cold War between the United States and the Soviet Union was a rivalry to develop the technologies of the future. First came the race to deploy nuclear weapons on intercontinental missiles. Then came the space race. Then came US President Ronald Reagan’s “Star Wars” program, which seemed to launch a new race to build missile-defense systems. But it soon became clear that the Soviet economy had fallen decisively behind.

Now, a new struggle for technological mastery is underway, this time between the US and China, over artificial intelligence. Both have signaled that they want to manage their competition through dialogue over the development, deployment, and governance of AI. But formal talks on May 14 made it painfully clear that no grand bargain can be expected anytime soon.

That should come as no surprise. The issue is simply too broad – and governments’ perspectives and goals too different – to allow for any single “treaty” or agreement on transnational AI governance. Instead, the potential risks can and should be managed through multiple, targeted bargains and a combination of official and unofficial dialogues.

In It to Win It

China and the US are each fully engaged in policymaking to shape the future of AI, both domestically and internationally. US President Joe Biden’s October 2023 executive order required US government agencies to step up their own use of AI, and to update how they regulate the use of AI in their respective sectors. Similarly, China’s central government has repeatedly signaled the importance of AI development, and the Cyberspace Administration of China (CAC) has issued stringent regulations on the use of algorithms, deepfakes, and AI-generated content.

As for shaping AI governance for the rest of the world, the US has already established multiple global partnerships focused on AI governance, and it led the drafting of a UN General Assembly resolution on “safe, secure, and trustworthy artificial intelligence systems for sustainable development.” Similarly, China announced a Global AI Governance Initiative in 2023 and now hosts an annual World AI Conference in Shanghai. With this year’s “Shanghai Declaration,” it unveiled additional plans to shape transnational AI governance. And not to be outdone by the US, China is co-sponsoring a resolution at the UN titled “Enhancing International Cooperation on Capacity-building of Artificial Intelligence,” which focuses on helping developing countries pursue AI in a “non-discriminatory” environment.

The US and China each recognize the importance not only of engaging in dialogue with each other, but also of being seen by the rest of the world to be doing so. The bilateral talks in May demonstrated that both countries will continue to pay lip service to dialogue despite their obvious rivalry. The US highlighted the importance of developing “safe, secure, and trustworthy” systems, and identified potential instances of abuse by China. The Chinese stated that AI development should be “beneficial, safe, and fair,” highlighted the UN’s role in global AI governance, and objected to US export controls.

But given all the attention that the US and China have devoted to AI governance and dialogue, why are their official statements so lukewarm? More to the point, why is it so hard to tackle real issues and come to an actual, substantive agreement? The answer can be found in each country’s domestic approach to AI governance, and how these domestic contexts affect the international dialogue.

The American Way

China and the US have starkly different views on what “AI governance” means, and on what “AI dialogue” entails and should aim to accomplish. In the US, governance is distributed by sector and generally focuses on addressing specific AI-related harms. This is partly due to normative policy goals like supporting innovation and avoiding excessive regulation; but it also reflects constitutional and practical limits on what the US government can actually do to regulate AI. Hence, Biden’s executive order instructs federal agencies to focus more on AI but does not seek to regulate the technology’s private use.

The administration likely determined that it lacks the authority to issue regulations on the use of AI by private actors. But Congress’s authority to regulate AI also faces challenges. A general AI law, like the one the European Union recently adopted, probably would be too broad to get through the House of Representatives and the Senate, and it would surely face legal challenges if it did. The Supreme Court’s decisions in Murthy v. Missouri (2024) and Moody v. NetChoice (2024) lend credence to the idea that code – including algorithm-based content moderation – qualifies as constitutionally protected speech in American jurisprudence, implying that the bar for regulatory intrusion would be quite high.

In practice, most AI governance in the US falls to sector-specific regulators – such as the Food and Drug Administration, with its rules on AI-assisted medical products. One exception is in the national-security context; the US government has broad authority to regulate the use of AI for military purposes, and – arguably – to impose export controls on advanced semiconductors in order to limit China’s ability to develop its own military AI. The White House and the federal government thus participate in multi-stakeholder discussions about AI risks, and influence the practical development of AI by setting policy goals and promoting collaborative, voluntary principles and standards.

The US government’s approach to AI dialogue is similarly focused on concrete perceived risks that it can directly manage or regulate. Thus, the US delegation in May was led by the Special Assistant to the President and Senior Director for Technology and National Security, Tarun Chhabra, and the Special Envoy for Critical and Emerging Technology for the Department of State, Seth Center. Both focus primarily on policies relating to emerging technologies (rather than on US-China relations per se).

When the US government engages China and others on AI, its objectives are to develop voluntary general standards and principles, to articulate policy goals and values, and to identify specific military and national-security risks, such as autonomous weapons, AI-powered biological warfare, and cutting-edge hardware and software falling into the hands of nonstate actors.

The hope, then, is to find common ground on a joint policy direction or vision, or even to secure concrete agreements addressing specific risks and objectives. The Americans see dialogue as primarily about perceived threats, and not about other areas of US-China relations. During the May meeting, US officials raised concerns only about China’s actual and potential misuses of AI, and stressed the importance of maintaining open lines of communication.

The Chinese Way

The Chinese government’s approach to AI governance and dialogue is very different, not least because its primary concern is politics, narrative control, and power, rather than the technology itself. Nonetheless, Chinese regulators face many of the same practical challenges as their US counterparts when it comes to creating AI guardrails. Moreover, China also has a largely distributed, sector-specific approach to regulating the use of AI in different contexts, and it draws on input from experts in academia and the private sector.

And yet the only hard national-level regulations on AI (so far) were issued by the CAC, and they focus primarily on content control rather than the management of specific, concrete risks. The CAC rules require AI models to adhere to Communist Party of China narratives, thus placing expansive, though vague, requirements on technology companies, platforms, model developers, and anyone else who intends to use AI in a public-facing way. The CAC requires that all output from large language models (like ChatGPT in the US) conform to “socialist values” and CPC positions on sensitive topics, and it has even released its own chatbot based on Xi Jinping Thought.

Two new national institutions will shape the governance of AI and other technologies. The National Data Bureau will seek to leverage the value of China’s massive, but siloed, collections of data, and to regulate private and public uses of data, while the Central Science and Technology Commission will oversee the mobilization of national resources for developing AI and other emerging technologies.

Although both organizations were formally established in 2023, details about their operations remain sparse. But we do know that the Data Bureau will be led by Liu Liehong, and the Commission by Ding Xuexiang, one of President Xi Jinping’s chief lieutenants. Both officials are quite senior and have close connections to the CPC leadership.

This approach will affect AI governance across China. Although China boasts increasingly sophisticated national regulators with cutting-edge expertise, as well as specialized internet courts staffed by some of the world’s best-trained jurists, its AI-regulation regime remains vague, subject to shifting political narratives, and enforced by courts and agencies with limited authority. While AI governance in China is complex and widely distributed, everyone must respect the party leadership’s “discourse power” – meaning the prerogative to lead discussions on AI governance.

Politics First

These domestic dynamics naturally influence China’s approach to international dialogue as well. Here, too, politics comes first. From the Chinese perspective, the May talks were first and foremost about US-China relations, and only secondarily about AI governance. International talks on AI are too important for CPC leaders to cede to technical experts, CEOs, or anyone who is not directly answerable to them. Since the government’s harsh “crackdown” on the domestic tech sector in 2021 and 2022, AI experts, particularly tech company CEOs, have had only limited “discourse power.”

Few now dare to say anything that conflicts with national policy. Unlike OpenAI’s Sam Altman or Elon Musk, leading entrepreneurs in China, such as Alibaba co-founder Jack Ma, cannot travel around the world calling for different kinds of AI governance. Chinese developers, academics, private experts, and regulators still debate each other constantly (if not publicly) about the best approaches; but the top political leadership has other priorities for international dialogue, which is not led by technology authorities, as in the US, but by the foreign ministry’s Department of North American and Oceanian Affairs.

Many of China’s AI narratives echo those of the US and international organizations. For example, at the recent World AI Conference in Shanghai, Premier Li Qiang emphasized China’s willingness to work with the rest of the world, deepen innovative cooperation, promote inclusive development, and strengthen collaborative governance. But China criticizes what it sees as US efforts to limit its capacity to develop AI technologies (via controls on semiconductor exports and proposed investment restrictions). In its recent Shanghai Declaration on Global AI Governance, China’s foreign ministry highlights, among other things, the “right of all countries to independent development … based on their own national conditions.”

Though it does not name the US directly, the declaration was likely aimed at criticizing the US and highlighting how China’s own approach to transnational AI governance is different. In the May dialogue, China resisted US efforts to separate the issue of perceived AI risks from export controls and other aspects of US-China relations. Many Chinese experts on AI governance view America’s negotiating strategy as disingenuous – or as a gambit to lock China into second place. They see the potential risks as rather abstract or distant, whereas export controls and other limitations are inflicting concrete harm on China’s AI industry right now.

Of course, both countries are concerned about certain risks, such as from AI-driven decision-making on military matters (including nuclear weapons). But given its own domestic goals and perspective on the purpose of AI governance, the Chinese government does not necessarily see these concerns as more urgent or even separate from explicitly political goals. While official dialogue will continue, it will be difficult for both sides to realize their primary objectives.

Limits and Alternatives

Neither the US nor China is going to change its institutions or fundamental goals for AI governance anytime soon. US anxiety about China’s potential abuse of AI will likely remain, as will its export controls and investment restrictions. China can no longer rely on American business interlocutors to water down or prevent limits on economic and technological exchange between the two countries. Moreover, China has become a less attractive market for US investors, venture capitalists, and tech companies; all are increasingly hesitant to be seen as working with the Chinese.

China’s own “politics first” approach and suspicion of US intentions will also remain. Thus, when it comes to entering specific agreements with the US (such as on developing rules for autonomous weapons or the use of AI in cybersecurity), China’s contemporary perception of US-China relations will determine what is possible.

Notwithstanding these challenges, “track-two” talks among nongovernmental experts still have much potential. After all, much of what makes AI such a difficult and sometimes nakedly political topic also makes it amenable to different kinds of dialogue. Transnational AI governance cannot just be about agreements between governments; it also must involve substantive forms of collaboration between whole societies. The Sino-American AI dialogue is bigger than the two governments and should involve interactions among not just politicians but also regulators, academics, civil society, and private-sector experts.

Official talks would also benefit from including more government agencies. The May meeting did make room, alongside foreign-affairs officials, for agencies that actually govern AI. But more substantive discussions would be possible if regulators had greater opportunities to meet with their counterparts.

Moreover, competition on international AI governance should not be framed as a wholly bad thing, especially if both countries follow through on offering benefits beyond their borders, such as by helping developing countries build their own capacity to leverage AI.

China is already providing public resources to help others – including private companies – develop AI tools. Notable examples include the CAC’s basic (Chinese) corpus to help train LLMs and the Shanghai AI Lab’s GenAD, a video-generation model that can help developers train autonomous vehicles. At the same time, many US companies have developed open-source foundation models that are available for users around the world, including in China. This kind of continued competition could make AI resources more affordable and widely available globally.

Take Two

Because track-two dialogues include academics, private companies, think tanks, civil-society organizations, and others who are committed to sharing best practices and building trust in a specific domain, they can do most of the heavy lifting when it comes to addressing specific AI-governance challenges. While discussions often start in closed settings, the big takeaways usually inform policymaking processes in both countries. Track-two talks thus are helpful, and often necessary, in preparing the ground for agreements between governments.

AI is frequently compared to nuclear technology, which has long been subject to international agreements. But while both have great transformative potential, AI is far more distributed across government actors and society. Even if AI policy goals can be decided at the highest levels of government, the work of implementation is far more complex.

For example, unlike with nuclear weapons, the president is not going to approve every use of a drone or similar piece of technology. To have an agreement on lethal autonomous weapons, the US and China must not only agree with each other on basic principles; they also must understand how such an agreement will be implemented in each country’s military. Track-two talks provide the opportunity to gain more granular understandings of such questions.

A recent, relatively successful example of this is China’s crackdown on fentanyl factories. Biden raised this issue with Xi last year, and following a high-level political agreement, track-two dialogues and substantive law-enforcement cooperation and intelligence-sharing between US and Chinese agencies have begun to yield results. While neither country is going to change its policies or ambitions with respect to AI, track-two talks can help manage, if not contain, the competition between them.

Dialogue, both official and unofficial, may also intensify in the face of crises or as AI risks materialize. After the Cuban missile crisis, the US and the Soviet Union famously established a hotline at the highest level to prevent unwanted escalation. With AI already being deployed in warfare, and with tensions rising in the South China Sea and across the Taiwan Strait, Chinese and US leaders have ample grounds to do the same.

At the same time, both should accept that their goals and preferences will remain at odds. This is only natural, given their radically different political systems and values. Obviously, the US should not try to harmonize its policies on misinformation and disinformation with those of China, nor should it expect China to adopt Western policies. But different goals in some areas need not derail the possibility of constructive dialogue in others, such as ensuring that humans remain in charge of any decision to launch nuclear weapons.

Finally, both governments and participants in track-two talks should recognize their limits. Countries, like people, are political animals. Dialogue about AI between China and the US cannot resolve the two countries’ geopolitical rifts, such as disagreements over Taiwan or their increasingly contentious bilateral economic relations. The goal of engagement should be to solve a specific problem related to a particular AI use.

The official dialogue between the US and China will continue to face serious political and institutional constraints, limiting what is possible. Much more can be achieved through unofficial channels that connect experts from across both societies. At the very least, we can gain a better understanding of each other’s institutions and their purposes, as well as develop the infrastructure to act if hypothetical scenarios become real.
