David McNew/AFP via Getty Images

The False Promise of “Ethical AI”

Responding to growing demands for more accountability in the development and deployment of artificial intelligence, public policymakers have signed on to the fashionable push for “ethical AI.” Yet by adopting what amounts to a euphemism for inaction, they are playing directly into the industry’s hands.

WARSAW – The use of algorithms “in the wild” to measure, quantify, and optimize every aspect of our lives has led to growing public concerns and increased attention from regulators. But among the list of responses are some impractical ideas, not least those being promoted under the banner of “ethical AI.”

It is understandable that public authorities would want to mitigate the downsides of certain applications of artificial intelligence, particularly those associated with increased surveillance, discrimination against minorities, and wrongful administrative decisions. But cash-strapped governments are also eager to embrace any technology that can deliver efficiency gains in the provision of public services, law enforcement, and other tasks. The stalemate between these two priorities has shifted the debate away from law and policymaking, and toward the promotion of voluntary best practices and ethical standards within the industry.

So far, this push, which has been championed by public bodies as diverse as the European Commission and the US Department of Defense, revolves around the concept of “algorithmic fairness.” The idea is that imperfect human judgment can be countered, and social disputes resolved, through automated decision-making systems in which the inputs (data sets) and processes (algorithms) are optimized to reflect certain vaguely defined values, such as “fairness” or “trustworthiness.” In other words, the emphasis is placed not on politics, but on fine-tuning the machinery, either by debiasing existing data sets or creating new ones.

Masking deeper political conflicts behind a shroud of technologically mediated objectivity is not new. But, with the pace of automation accelerating, it is not a practice we can ignore. The more that policymakers focus on promoting voluntary AI ethics, the more likely they are to respond in ways that are distracting, debilitating, and undemocratic.

Consider the risk of distraction. Ethical AI implies that a geographically agnostic set of best practices can be devised and then replicated across a broad array of settings. Perhaps through some multilateral forum, an ostensibly diverse group of experts would convene to prepare global guidelines for developing and governing AI in an ethical manner. What would they come up with?

The Berkman Klein Center for Internet and Society at Harvard University recently published a review of around 50 “AI principles” that have emerged from the broader debate so far. The authors find that the conversation tends to converge around eight themes – privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Notably missing are the two elephants in the room: structural power asymmetries and the looming climate catastrophe. These omissions indicate that the experts needed to provide a framework for ethical AI are operating on a relatively closed circuit, ultimately struggling to provide advice that captures the zeitgeist.


The quest for ethical AI also could erode the public sector’s own regulatory capacity. For example, the European Commission, having tried to shape the debate through its High-Level Expert Group on AI, has met with widespread criticism, suggesting that it may have undermined its own credibility to lead on the issue. Even one of the group’s own members, Thomas Metzinger of the University of Mainz, has dismissed its final proposed guidelines as an example of “ethical white-washing.” Similarly, the advocacy group AlgorithmWatch has questioned whether “trustworthy AI” should even be an objective, given that it is unclear who – or what – should be authorized to define such things.

Here, we can see how well-intentioned efforts to dictate the course of AI development might backfire. As long as the conversation remains vague, few can object to the idea of trustworthiness in principle. But the situation quickly changes once policymakers start overriding decades of accomplishments in science, technology, and society (STS), responsible research and innovation (RRI), human-computer interaction (HCI), and related fields. In the end, there is no singular list of recommendations that will be applicable across multiple levels of governance, for the simple reason that ethical AI is not an actionable policy goal in the first place.

Finally, devoting too much attention to AI ethics risks shifting the discourse away from more democratic forms of control. Owing to the corporate capture of many multilateral and higher-education institutions, a narrative of AI exceptionalism has prevailed. The argument that AI requires a unique set of standards forces the conversation from public arenas into closed redoubts of expertise, privilege, and global technocracy.

For the private sector, this narrative kills two birds with one stone: while avoiding any change to the lax regulatory status quo, tech companies can present themselves as socially responsible. For the general public, though, the consequences are less appealing. The false assumption that there is already a value consensus around AI pre-empts political contestation – the very essence of democracy – and eventually exacerbates social tensions and further erodes trust in government. Once ethical best practices have been formally instituted, automated systems will carry the imprimatur of objective knowledge, even though they will not be subject to any meaningful oversight. Their decisions will have law-like effects, leaving little room for nuance, context, or redress.

With the start of a new decade, it is clear that AI policymaking needs to move beyond the platitudes of voluntary ethical frameworks, and toward granular, context-specific regulatory and enforcement instruments that have been legitimized by democratic processes. As Frank Pasquale of the University of Maryland explains, algorithmic accountability comes in two waves, with the first focused on improving existing systems, and the second finally posing fundamental questions about power and governance.

Looking ahead, policymakers should abandon the narrative of AI exceptionalism, and start drawing on lessons from other instances of technological adoption and diffusion. They also need to strengthen the enforcement of existing laws, while allowing various civil-society groups and other stakeholders to reframe the issue around the value conflicts that have hitherto been kept at bay. The closed circle of newly minted AI ethicists, one hopes, will be broken to make room for those most affected by the disruptive processes of AI and automation: the end users and citizens whom governments have a duty to protect and serve.
