Study to reveal journalists’ perceptions of AI use in press releases – Scholar Q&A with Renita Coleman

By Jonathan F. McVerry

Title Card: What do journalists think about AI-written press releases?

Receiving press releases is nothing new for journalists, but artificial intelligence has altered the script. The rapidly evolving technology is becoming more accepted among public relations professionals, and a duo of scholars is curious: “How do journalists feel about that?” Nine-time Page Center scholar Renita Coleman, University of Texas at Austin, and first-time scholar Kami Vinton, Sul Ross State University, are investigating whether transparency about AI use in press releases affects credibility. Based on a three-part experiment involving active journalists, the scholars hope to provide practical insights that public relations professionals can use when disclosing AI in their work.

The project is part of the Page Center’s 2025 research call on the ethics of generative AI. In this Q&A, Coleman talks about the inspiration for this project, the plan for the study and why public relations professionals should care about journalists’ opinions of AI.

Share the story of this research. Where did the idea come from? How did it lead to this grant?

I wanted to work with the Page Center again. I was one of the first Legacy Scholars when the Center first started funding grants. I haven’t done one in a while, and when I saw this call about AI, I thought it would be a great opportunity. In this case, it is like the Page Center is the idea generator.

It started on a Reddit group where they talk about PR issues. There was a discussion about an AI tool that wrote press releases. One post said it didn’t matter how good the tool was if journalists aren’t interested in the content. I thought that was interesting, because the PR Council is pretty clear about disclosing AI when and where you use it. So, I read hundreds and hundreds of press releases on PR Newswire, and I didn’t see any of that disclosure happening. I never saw AI mentioned.

What is the consensus about AI use in communications in, for example, news stories?

Regular readers believe that AI in journalism is unbiased, because there’s not a person associated with it. They say it’s better than a person with biases. Of course, this is ridiculous, because people train AI. So, whatever biases we have – and we all have them – AI will sometimes make them worse, because it’s unable to cloak them. People who know a lot about AI know that it’s just scraping what’s out there already, and that there are biases and hallucinations … like hallucinating 10 books that don’t exist.

We know that PR professionals and journalists like AI, because it saves time. They worry about it though. They know it can get things wrong and that it might be replacing them, but they like certain things. So, we know what PR people think about it in PR and what journalists think about it in journalism, but not what journalists think about it in PR. That is why I felt there was a need for this study.

You’ll be running an experiment with journalists. Can you share your plan and what each part of the study looks like?

There is a service called Prowly. It has its own AI press release generator, but it also has a database of journalists. I am going to use that service to pick journalists and send them an invitation about participating in this study. I am hoping that I can get reporters from small papers, medium-sized papers, large papers, TV stations, radio and online. I have found that journalists are happy to participate in academic research if they think it's going to help them.

There will be three groups, and each will read three press releases. The only difference is the attribution: one group will get three press releases marked as written by the company, and a second group will get releases marked as provided by AI. There will be a little picture of a bot that says news provided by the organization or news provided by AI. Then, a third group will have both of those – the logo of the company plus the logo of the bot. It’ll say, “News provided by the company through AI,” and also say what the AI was used for: research, writing the headline and proofreading. The PR Council recommends that press releases disclose when AI was used and how it was used.

Can you talk about your expectations, and what the results might provide communicators?

I'm mainly doing this for PR people. They will know if it's better to tell journalists that they used AI and what for. Based on theory and the PR Council's recommendation, that's my expectation. I want PR people to know that if you say you used AI and what you used it for in your press release, journalists are going to trust it more. They're going to be more likely to use it.

How does the Page Center grant fit into this project? How does it help make it happen?

The Page Center is great. I can’t say enough good things about the Center. Everybody there is so helpful. This wouldn’t be happening without the call for proposals on AI ethics and public communication. Like I said, the Center is an idea generator. I also like that it’s so practical. While I’ve made my name doing theoretical research, I’m a practitioner at heart. I really believe our research should help professionals design better communications.