The Threat of Persuasive AI
In the not-so-distant future, generative artificial intelligence could power new systems that can persuade on behalf of any person or entity. Malicious attempts at social engineering would surely follow, and efforts to label AI interfaces would do little to safeguard individuals against manipulation.
LONDON – What does it take to change a person’s mind? As generative artificial intelligence becomes more embedded in customer-facing systems – think of human-like phone calls or online chatbots – this ethical question demands to be addressed widely.
The capacity to change minds through reasoned discourse is at the heart of democracy. Clear and effective communication forms the foundation of deliberation and persuasion, which are essential for resolving competing interests. But there is a dark side to persuasion: false motives, lies, and cognitive manipulation – malicious behavior that AI could facilitate.
In the not-so-distant future, generative AI could enable the creation of new user interfaces that persuade on behalf of any person or entity with the means to establish such a system. Leveraging private knowledge bases, these specialized models would offer competing versions of the truth, each vying to generate the most convincing responses for its target audience – an AI for each ideology. A wave of AI-assisted social engineering would surely follow, with escalating competition making it easier and cheaper for bad actors to spread disinformation and perpetrate scams.