The pervasive influence of media on public perceptions of critical societal institutions, particularly law enforcement, forms a significant area of academic research. Existing literature offers a comprehensive understanding of how human-generated media shapes attitudes towards police, evolving from broad observations to detailed analyses of specific formats and audience roles. However, an emerging and crucial dimension, namely how Large Language Models (LLMs) generate text about police officers, remains unexplored. LLMs, trained on vast textual datasets, inherit and can propagate social biases present in their training data, raising concerns that AI models might capture, reproduce, and even intensify existing stereotypes when generating content about police officers. Building on cultivation theory, which posits that consistent, long-term media exposure shapes perceptions of social reality, and social location theory, which highlights how an audience's background shapes media interpretation, this thesis compares human-written and AI-generated text regarding police officers. Specifically, it investigates how biases from traditional and digital media translate into both human perception and AI models. To address this, a mixed-method exploratory design was employed, utilising both qualitative and quantitative data collection and analysis techniques. Human participants completed a survey with four open-ended prompts designed to elicit perceptions of neighbourhood crime and police, police-civilian communication, trust and authority, and media portrayals of the police. Concurrently, two AI models, GPT-4 and Gemini, were prompted with the same scenarios to generate comparable textual data. The collected textual data underwent qualitative thematic analysis to identify patterns and meanings, complemented by quantitative sentiment analysis to compare sentiment distributions.
This comparison of human and AI-generated narratives revealed a significant divergence in sentiment concerning police trust and authority, with AI models often articulating a more critical stance than human respondents. This outcome suggests that LLMs may not only reflect but also potentially reinforce existing societal biases embedded within their training data.

João Fernando Ferreira Gonçalves
hdl.handle.net/2105/76773
Digitalisation, Surveillance & Societies
Erasmus School of History, Culture and Communication

Mieke Dohmen. (2025, October 10). Digital Blue: Investigating Sentiment and Bias in Human and AI Narratives on Police Officers. Digitalisation, Surveillance & Societies. Retrieved from http://hdl.handle.net/2105/76773