April 02, 2025
Beyond the code: Why algorithm transparency matters for your AI chatbot

Our study, published in Public Relations Review in December, explores the effectiveness of AI-algorithm transparency signaling as a means to improve organization-public relationships (OPRs) within AI-assisted communications.
Building on signaling theory and trust transfer theory, the research examines whether transparency in AI algorithms influences users' trust in AI systems, whether that trust subsequently transfers to the parent company, and whether it ultimately affects relational satisfaction with the company. An online experiment with 537 participants revealed that transparency signaling significantly boosts users' relational satisfaction with the company's AI platform.
However, this effect is mediated by trust in both the AI system and the parent company rather than operating through a direct link to relational satisfaction. Our findings offer practical recommendations for AI domain experts and public relations professionals on highlighting transparency and maintaining accountability in AI-mediated communication, thereby enhancing public relations outcomes.
When chatbots meet the public
From ChatGPT to virtual assistants in our banking apps, AI chatbots now handle a spectrum of our day-to-day interactions. But can we trust these digital helpers? And what does “transparency” even look like when most of us can’t read code?
The transparency signaling advantage
Our research shows that people don't need to see the entire codebase to trust an AI system. Instead, it's about organizations signaling that they have nothing to hide. For instance (a brief sketch of how these signals might appear follows the list):
- Explaining, in plain language, how the chatbot pulls its data or why it might provide partial or incorrect answers
- Acknowledging potential errors or limitations (e.g., "This system may generate inaccurate answers at times")
Trust transfer
Interestingly, once people come to trust the chatbot, they often extend that trust to the parent company behind it. By communicating responsibly and transparently about how their AI works, companies can build long-term goodwill, which is critical in an era of widespread concern about data privacy and algorithmic bias.
This matters...
- For organizations: Proactively disclosing how your AI decides can help avoid public backlash when the system (inevitably) makes errors.
- For users: Feeling secure about a chatbot's reliability can reduce anxiety and frustration, making technology adoption smoother.
- For society: Greater accountability in AI helps level the playing field, ensuring that advanced technologies benefit as many people as possible while minimizing harm.
Bottom line
Effective algorithm transparency is less about dumping complicated code and more about sending deliberate signals that you are responsible, trustworthy, and proactive in acknowledging AI's strengths and weaknesses. Doing so not only boosts trust in the chatbot itself but can also significantly enhance the company's reputation and the relationships users maintain with it.
For more information about this study, email Park at kepark@hkbu.edu.hk. This project was supported by a 2023 Page/Johnson Legacy Scholar Grant from the Arthur W. Page Center.