Identifying ethical challenges in GenAI use, finding solutions for corporate communications – Scholar Q&A with Anne Perera

By Jonathan F. McVerry

[Title card: Q&A with Anne Perera]

As the communications landscape rapidly evolves with the rise of generative AI, questions about ethics, trust and best practices become increasingly pressing. Three-time Page Center scholar Juan Meng and first-time scholar Anne Perera, both of the University of Georgia, are examining how AI is reshaping corporate communications and public relations. Through a two-phase study involving in-depth interviews with senior communication executives and a broad quantitative survey, the scholars aim to uncover how industry leaders are navigating change, and what tools or frameworks might help guide responsible AI adoption.

The project is part of the Page Center’s 2025 research call on the ethics of generative AI. In this Q&A, Perera, a third-year Ph.D. student, discusses the background of this research, the design of the project and the challenges companies face when adopting GenAI.

How did your collaboration start?

Dr. Meng is my faculty advisor. We have worked very closely throughout my doctoral journey. I first started working with Dr. Meng on the North American Communication Monitor [a biennial research report, part of the Global Communication Monitor series, organized and published by the Plank Center for Leadership in Public Relations]. That project gave us early insights into how communication professionals perceive GenAI – the tools, the benefits, the challenges, and how professionals weigh individual benefits and challenges against organizational ones. We built on that with a fellowship I did with Ketchum and the Institute for Public Relations last summer. Through the fellowship, we had the opportunity to interview several agency professionals at various stages of their careers to gauge their experiences with and insights into GenAI application in communication practice.

And that led to your Page Center project?

Yes, it grew out of an ongoing conversation between us. My research focuses on how emerging technologies like generative AI are reshaping communication practice. Dr. Meng brings a lot of expertise in PR ethics, leadership and global communication. Our plan is to deepen our knowledge of the impact of GenAI on corporate communication practices. We started working on GenAI-related research in early 2023, and then we saw the Page Center research call. Dr. Meng showed it to me and asked if I wanted to work on it with her. So, I definitely jumped on that.

How did you land on this project?

Both of us noticed a gap. AI tools are spreading really quickly in practice, more so in the last year than in the previous three combined, but there is still a lot of uncertainty about how they are used and how they should be used. What are the implications of their use? We found that the ethical and reputational implications were not being studied thoroughly, and that’s where we saw the gap. Most existing research stays at a descriptive level. It documents things like adoption rates or benefits and challenges, but only at the surface. Our earlier NACM research shows that nearly 70% of professionals said GenAI will substantially change the profession, but fewer than half use these tools frequently. That gap between optimism and adoption itself highlights why ethics matter. If professionals don’t feel confident about the technology and its implications, adoption will stall. What’s missing is an ethical framework, something like a roadmap for balancing innovation with ethics. This project allows us to approach it from both academic and applied angles.

Walk us through your project.

We designed this research in two phases. The first phase involves a series of in-depth interviews with around 20 senior communication executives to capture what they think about AI adoption, ethics and reputation. That will give us a foundation for the next stage, a quantitative phase built on the findings from phase one. In that phase, we will develop a survey and scale it to a national panel of communication professionals. The goal is to produce actionable tools that help professionals use AI responsibly and help leaders put policies and training strategies in place. Ultimately, we want to move past the debate of “Is AI good or bad?” and instead ask, “How do we use it responsibly in corporate communication?”

Are there examples of challenges or issues you are already aware of that these tools or frameworks could potentially help with?

We’ll have to address transparency, privacy and accountability issues. Those came up a lot in our previous study, but that was a different time; the GenAI landscape has already changed. That’s something else we’ll need to account for so the framework we develop can address these issues at the organizational scale. Some examples: in our proposal we highlighted that 60 percent of professionals worry about privacy and credibility in AI-assisted communications. A real-world example is Air Canada’s chatbot issue last year, when the chatbot gave inaccurate information about bereavement fares to a grieving passenger. Who’s accountable? The tribunal ultimately ruled that the airline was responsible, reinforcing the ethical principle that organizations cannot disclaim accountability for their automated systems. And another: one of the junior professionals we talked to said he’s not afraid of AI replacing him, but he is afraid of being asked to use the technology without really understanding it. That captures the stakes really well. It isn’t just about efficiency but about trust and readiness.

What’s the Page Center’s role in this project? How does the support help?

The Page Center’s support is essential to putting this study together. The funding makes it possible for us to run both stages of the project – from the interviews to the large-scale national survey – which would be difficult, if not impossible, otherwise. Equally important is the amplification of this study through blogs, Research Roundtables and other networks. It makes sure our findings reach professionals who can put them into practice. So, the Page Center doesn’t only fund research for us; it actively shapes ethical conversations, which is important, especially now with new conversations around AI. We’re both more than honored to be part of that mission.