Abstract
Conversational AI systems, powered by advanced Large Language Models, have rapidly developed human-like persuasion capabilities that raise concerns about psychological manipulation. This provocation examines the ethical problems that arise when these systems exploit cognitive biases and social compliance mechanisms during interactions with users. Building on established theoretical work and recent empirical research, we identify a particularly concerning pattern: the recognition-behaviour gap, where users consciously identify manipulative strategies yet fail to protect themselves accordingly. Current ethical frameworks fall short of addressing these sophisticated risks in conversational contexts. We propose a targeted ethical framework for AI governance centred on four key dimensions: preserving user autonomy, enhancing transparency, systematically monitoring for vulnerabilities, and implementing contextual safeguards. This paper confronts these ethical challenges directly and calls for practical protective measures to safeguard user autonomy as conversational AI becomes increasingly prevalent in everyday life.
Original language | English
---|---
Title of host publication | CUI '25: Proceedings of the 7th ACM Conference on Conversational User Interfaces
Editors | Jaisie Sin, Edith Law, Jim Wallace, Cosmin Munteanu, Danai Korre
Pages | 1-6
ISBN (Electronic) | 979-8-4007-1527-3
DOIs |
Publication status | Published - 7 Jul 2025
Event | 7th ACM Conference on Conversational User Interfaces, Waterloo, Canada. Duration: 8 Jul 2025 → 10 Jul 2025
Conference
Conference | 7th ACM Conference on Conversational User Interfaces
---|---
Country/Territory | Canada
City | Waterloo
Period | 8/07/25 → 10/07/25
Keywords
- Conversational Systems
- Psychological Manipulation
- Ethical AI
- User Autonomy
- Recognition-Behaviour Gap
- Responsible AI Governance
- Cognitive Bias
- Ethical Frameworks