Report says AI policy is messy. Organizations are adapting with tried-and-true ethical principles

By Megan Pietruszewski Norman, Penn State

AI disclosure and transparency

Last year, I attended the International Public Relations Research Conference (IPRRC) in Orlando, FL. As I moved from table to table to hear research presentations and engage in discussions, it was clear that one topic dominated the conversation for both academics and practitioners: artificial intelligence.

It’s easy to understand why AI was a dominant topic at the conference. Read the news or simply do a Google search these days, and it’s hard not to encounter something related to AI. The public launch of OpenAI’s ChatGPT captured news attention, which has only grown with the release of dozens of other generative AI (GenAI) tools like Gemini, Claude, Copilot, and Midjourney. But with the availability of GenAI tools come questions about how to use them transparently and ethically.

It was at IPRRC that I met Tim Marklein, CEO of Big Valley Marketing. This connection led to an opportunity to work as a remote summer research consultant for Big Valley. Together with another student, Morgan van den Berg from Michigan State, I dove into the literature on AI disclosure and transparency.

Our research questions asked what approaches had been proposed for AI transparency and disclosure, and what consumers’ expectations were regarding AI transparency and disclosure. We reviewed dozens of publications sourced intentionally from different perspectives: government, industry, academic, and professional organizations. At the end of our summer work, we had incorporated 99 sources into our final report.

In summary, we found that the AI disclosure and transparency landscape is messy. Although disclosing the use of AI and being transparent about that use seem like common sense, there appears to be no universal definition of “disclosure,” and it can mean different things to different people. For example, is it enough to say that an AI tool was used, or do we need to specify which tool was used, to what degree, and why? Some academic publishers now require disclosure with specific details.

Where there does seem to be agreement is that disclosure and transparency are necessary to minimize the risks of GenAI. We have all heard stories of AI hallucinating or spreading inaccurate information (like recommending glue on pizza). Risks such as copyright infringement, plagiarism, bias, and misinformation are real concerns that users of GenAI need to consider.

Surveys show that consumers also want to know when they are encountering AI-generated materials. A Getty Images survey showed that 90% of consumers want to know when an image they see was generated with AI, and a 4A’s survey similarly showed that 72% of consumers want to know when they see advertising content created with AI.

So where do we go from here?

Many professional organizations are turning back to existing codes of conduct and ethical principles to provide guidance. Some organizations, like the Institute for Public Relations, focus on substantial use as the threshold for disclosing GenAI use – and provide a disclosure template for research publications. The Public Relations Council provides guidelines on generative AI use and includes a list of AI bias questions to ask. Many professional organizations and academic publishers emphasize the need for human oversight and accountability for AI-generated material.

Government action on AI transparency is moving as well, with numerous bills introduced. Some focus on election integrity and on disclosing when AI is used in political advertisements. Internationally, the EU passed the most comprehensive AI regulation to date with the EU AI Act. It is important for communication practitioners to know the local regulations around AI use and disclosure.

Building on the secondary analysis, Big Valley offers suggestions for AI disclosure, such as aligning AI disclosure policies with brand strategy and carefully considering the authorship label.

“Proactive disclosure of AI use – whether it’s part of a product, service or content – will ultimately be beneficial to fuel AI trust and adoption, but it comes with an array of short-term challenges,” Marklein said. “Organizations need to be thoughtful and authentic in their AI disclosures, listen attentively to stakeholder feedback, and adapt accordingly. As always, trust will be earned over time through both words and actions.”

You can read the full report, including recommendations, on the Big Valley website.

Megan Pietruszewski Norman is a doctoral student in the Donald P. Bellisario College of Communications at Penn State and a member of the Page Center's Graduate Student Lab Group.
