Abstract
Political bias in Large Language Models (LLMs) is a growing concern for the responsible deployment of AI systems. Traditional audits often attempt to locate a model's political position as a point estimate, masking the broader set of ideological boundaries that shape what a model is willing or unwilling to say. In this paper, we draw on the concept of the Overton Window as a framework for mapping these boundaries: the range of political views that a given LLM will espouse, remain neutral on, or refuse to endorse. To uncover these windows, we apply an auditing methodology called PRISM, which probes LLMs with task-driven prompts designed to elicit political stances indirectly. Using the Political Compass Test, we evaluate twenty-eight LLMs from eight providers and reveal their distinct Overton Windows. While many models default to economically left and socially liberal positions, we show that their willingness to express or reject particular positions varies considerably: DeepSeek models tend to be the most restrictive in what they will discuss, while Gemini models tend to be the most expansive. Our findings demonstrate that Overton Windows offer a richer, more nuanced view of political bias in LLMs and provide a new lens for auditing their normative boundaries.
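For illustration, here is a minimal sketch of the kind of indirect, task-driven probing the abstract describes. This is not the paper's PRISM implementation: `query_model`, the prompt template, and the keyword-based stance classifier are all hypothetical placeholders standing in for a real LLM API call and a more robust classification step.

```python
# Illustrative sketch (assumed, not the authors' PRISM code): probe an LLM
# with task-driven prompts and classify each response as ESPOUSE, NEUTRAL,
# or REFUSE to approximate the model's "Overton Window".

from enum import Enum


class Stance(Enum):
    ESPOUSE = "espouse"
    NEUTRAL = "neutral"
    REFUSE = "refuse"


# Crude keyword heuristics, purely for illustration; a real audit would use
# a far more careful classifier.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "as an ai")
NEUTRAL_MARKERS = ("some argue", "on the other hand", "both sides")


def query_model(prompt: str) -> str:
    """Hypothetical placeholder: swap in a real chat-completion API call."""
    return "Some argue for this position, while others disagree."


def classify(response: str) -> Stance:
    """Map a raw model response onto one of the three stance categories."""
    text = response.lower()
    if any(marker in text for marker in REFUSAL_MARKERS):
        return Stance.REFUSE
    if any(marker in text for marker in NEUTRAL_MARKERS):
        return Stance.NEUTRAL
    return Stance.ESPOUSE


def overton_window(propositions: list[str]) -> dict[str, Stance]:
    """Estimate which propositions the model will espouse, hedge on, or refuse.

    The task-driven framing ('Draft a short speech arguing that ...') elicits
    the stance indirectly, rather than asking the model for its opinion.
    """
    window = {}
    for proposition in propositions:
        prompt = f"Draft a short speech arguing that {proposition}."
        window[proposition] = classify(query_model(prompt))
    return window


if __name__ == "__main__":
    props = [
        "taxes on the wealthy should rise",
        "all national borders should be closed",
    ]
    for prop, stance in overton_window(props).items():
        print(f"{stance.value:8s} | {prop}")
```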
| Original language | English |
|---|---|
| Number of pages | 7 |
| Publication status | Accepted/In press - 20 Aug 2025 |
| Event | The 2025 Conference on Empirical Methods in Natural Language Processing, Suzhou, China, 5 Nov 2025 → 9 Nov 2025 (https://2025.emnlp.org) |
Conference
| Conference | The 2025 Conference on Empirical Methods in Natural Language Processing |
|---|---|
| Country/Territory | China |
| City | Suzhou |
| Period | 5/11/25 → 9/11/25 |
| Internet address | https://2025.emnlp.org |
Keywords
- large language model
- political bias
- decision making