October 31, 2024
What do communication practitioners really think about AI-triggered threats at their organization?
We are all familiar with the growing popularity of artificial intelligence over the past few years. We have AI in our phones, in our refrigerators and even in our cars. There has also been growing demand for AI use in day-to-day work in the communication industry. However, while AI can be cool, it can also pose ethical threats to organizations.
Ethical threats like what? Well, misinformation, discrimination and bias, privacy violations, and more. You may be asking yourself: what are organizations doing about it if it's such a threat?
That’s what our study wanted to find out. Drawing on extant ethical AI challenges and the problem of many hands (PMH) framework, we set out to learn communication professionals’ attitudes and perceptions toward AI ethical threats, as well as how they view the strategies that have been proposed to manage those threats.
We asked 250 in-house and 157 agency communication management and PR professionals in the United States to answer our survey. They were recruited through the survey company Centiment.co in January 2024. We asked them multiple questions, including their frequency of AI use, their confidence in using AI, their perceived efficacy of AI, their level of concern about AI-related threats, their attribution of responsibility for ethical threats, their perceived efficacy of strategies for addressing those threats, and their intention to implement ethical AI practices. So, what did we find? Well, quite a bit.
Let’s break it down.
We found that AI developers were perceived to hold greater responsibility than others, except for the AI ethics advisory board, for addressing ethical threats in practitioners’ day-to-day work. A slightly higher level of responsibility was also attributed to C-suite executives, lawmakers and the organization’s IT department. Participants in our study attributed less responsibility to entry-level employees and clients. We also found significant connections between perceived responsibility attribution and communication practitioners’ behavioral intention to manage ethical threats.
Practitioners’ behavioral intention was also positively associated with the perceived responsibility of organizational leaders and negatively associated with the perceived responsibility of entry-level employees and clients. That is to say, communication practitioners in the United States seem more likely to take responsibility themselves when they believe their leaders’ responsibility is high, and less likely to take responsibility when they believe that entry-level employees and clients bear more of it.
So you may be asking, who cares? Or what good does this research do?
Well, first, our study helps to fill gaps in and further extend the PMH framework. The theory prioritizes the individual model of responsibility attribution in crisis management, and this study showed how perceived responsibility attribution is associated with communication practitioners’ behavioral intentions to manage threats. As for how our research helps practically, we have some suggestions for organizations.
To effectively manage AI ethical threats, we think organizations need to build a culture of active responsibility that prevents responsibility evasion and impunity. They need to strengthen employees’ awareness of crisis-management goals and of the negative consequences AI ethical threats can bring. Organizations should also find ways to improve practitioners’ perceived efficacy of strategies, so that their ethical concerns translate into engagement in crisis management.
Our study also suggests that practitioners have some reservations. We found that communication practitioners are concerned about being fully transparent with their clients about AI use, which can pose challenges both in promoting ethical AI principles and in establishing a working relationship built on mutual trust.
However, we think this could be resolved by a culture of active responsibility: clients are more likely to develop trust when practitioners actively engage in problem prevention and take responsibility for the negative consequences of AI adoption. Therefore, when practitioners disclose AI use to clients, it is important to convey their ability and commitment to managing the risks and threats related to that use.