The increasing use of large language models (LLMs) has attracted significant attention due to their advanced natural language processing capabilities, which support tasks such as text generation, translation, and summarization. Adoption of this AI technology spans educational and professional settings. However, researchers have voiced concerns about the potential of LLMs to generate biased outputs, compromise information privacy, and misuse sensitive data. Privacy concerns are therefore central to understanding how users interact with LLMs. The IUIPC model by Malhotra et al. (2004) has been extensively used to study privacy concerns and their impact on risk beliefs, primarily in internet usage contexts. Attitude is another common consequence of privacy concerns, frequently studied alongside risk beliefs. This study applied the IUIPC model to LLMs to explore these relationships further. It identified a gap in understanding how AI literacy moderates, and critical thinking mediates, the relationships between privacy concerns, risk beliefs, and attitudes toward LLMs. AI literacy encompasses understanding AI capabilities, limitations, and ethical considerations, which is crucial for responsible use of LLMs. This research aims to fill these gaps by examining these moderating and mediating effects through a quantitative survey, providing insights into privacy concerns in the context of LLMs. The findings indicate that privacy concerns influence both risk beliefs and attitudes toward LLMs. Privacy concerns positively affect risk beliefs, suggesting that individuals with higher privacy concerns perceive greater risks when using LLMs. Privacy concerns show a weak negative correlation with attitudes toward LLMs, though this relationship was not statistically significant.
Critical thinking partially mediated the relationship between privacy concerns and attitude, but did not mediate the relationship between privacy concerns and risk beliefs. AI literacy was also tested as a moderator: only two of its three subscales, critical appraisal and practical application, significantly moderated the relationship between privacy concerns and attitudes, mitigating the negative impact of privacy concerns on attitudes toward LLMs. Technical understanding did not significantly moderate this relationship, and none of the AI literacy subscales moderated the relationship between privacy concerns and risk beliefs.

dr. Vivian Chen
hdl.handle.net/2105/75056
Media & Business
Erasmus School of History, Culture and Communication

Schoemaker, L. (2024, January 10). Navigating privacy concerns in the age of large language models: The roles of AI literacy and critical thinking. Media & Business. Retrieved from http://hdl.handle.net/2105/75056