2024-01-10
Varying transparency in AI chatbots: Implications for perceived trust, perceived expertise, and behavioral intention.
Publication
This study varies the transparency levels of AI chatbots through explainability and accountability, observing the implications for behavioral intention and the possible moderating effects of perceived trust and perceived expertise.
The rapid expansion of Artificial Intelligence (AI) has significantly transformed various sectors, including banking, where AI chatbots are revolutionizing service delivery. Despite their increasing adoption, comprehensive research on how transparency influences perceived trust and expertise, and how these factors affect users' behavioral intentions to use financial services, remains limited. This study aims to fill this gap by examining these relationships in the context of AI banking chatbots.

Transparency in AI systems involves clearly communicating the AI's capabilities, limitations, and decision-making processes. A key aspect of transparency is explainability, which refers to the AI's ability to articulate its actions and decisions in a way that users can understand. Accountability is another crucial component, ensuring that AI systems are responsible for their actions by providing mechanisms for feedback, issue reporting, and redress. Perceived trust is defined as the user's belief in the AI's reliability and beneficial intent, while perceived expertise reflects confidence in the chatbot's competence. Behavioral intention refers to the likelihood that a user will engage with services suggested by the chatbot.

This study employed a quantitative approach, gathering data from 273 participants through a survey that measured explainability, accountability, perceived trust, perceived expertise, and behavioral intention. Data analysis included linear regression analyses, ANOVAs, moderation analyses, and mediation analyses. Key findings revealed that perceived expertise was a strong predictor of behavioral intention, while perceived trust also significantly influenced behavioral intention. However, the mediation analysis indicated that perceived trust does not mediate the relationship between perceived expertise and behavioral intention.
Interestingly, the moderation analysis showed that perceived trust does not moderate the relationship between explainability/accountability and behavioral intention. Instead, perceived expertise directly influenced behavioral intention, independent of perceived trust. These findings underscore the critical role of perceived expertise in driving user adoption of AI chatbots in the financial sector. The study contributes to the existing literature by highlighting the importance of perceived expertise and trust in influencing behavioral intention, while challenging the expected mediating role of trust. In practical terms, these insights can guide banking institutions in designing and implementing AI chatbots that enhance user engagement by demonstrating high expertise and building trust.
Additional Metadata | |
---|---|
dr.ir. Niels Vink | |
hdl.handle.net/2105/74969 | |
Media & Business | |
Organisation | Erasmus School of History, Culture and Communication |
Lopez-Heurtin, Tasio. (2024, January 10). Varying transparency in AI chatbots: Implications for perceived trust, perceived expertise, and behavioral intention. Media & Business. Retrieved from http://hdl.handle.net/2105/74969 | |