August 28, 2025
Exploring transparency’s role in combating AI-generated misinformation – Scholar Q&A with Bingbing Zhang
It’s no secret that misinformation has become a growing challenge in the digital age. In many ways, artificial intelligence technology has supercharged those challenges by making it easier to create and spread deceptive messages. First-time Page Center scholar Bingbing Zhang, University of Iowa, is leading a project that examines current AI labeling practices (i.e., warnings or transparency statements). Her Page Center project will build a better understanding of which labeling strategies work best, and it will offer potential policies for social media companies to keep audiences informed with reliable, factual information.
The project is part of the Page Center’s 2025 research call on the ethics of generative AI. In this Q&A, Zhang talks about the challenges of combating AI-generated misinformation, the effectiveness of labeling and how a person’s prior beliefs influence their response to misinformation.
What are your research interests? How did they lead you to this project?
My research mainly focuses on media effects, political communication, and health and science communication. I'm interested in how media messages affect people's beliefs, attitudes and behaviors, and in finding strategic, effective messages that help people gain more awareness of political, health and science issues. So, I have been very interested in misinformation and how to correct people's misbeliefs about those issues. That line of my research examines how to label misinformation, and whether labeling helps people become more aware of misinformation and more resistant to it.
What should we know about people’s resistance to misinformation?
I have looked at warning labels on social media such as Twitter, now X, or Facebook. I'm interested in how these kinds of warnings trigger people to be more careful about information. Some of my research found that people are likely to fall into motivated reasoning, which means their judgment of whether a piece of information is factual tends to be based on their partisanship or political ideology. That made me think about whether simple labeling works, which kinds of labeling work best, and what kinds of corrective measures are most effective. That is what I want to test.
Where does AI fit in?
With the development and progress of AI technology, it’s very easy for people to create misinformation. So, I recently got interested in AI-generated misinformation, particularly in political advertising. Some social media platforms require advertisers – all advertisers, not just political ones – to label whether their content is AI-generated. But simple labeling might not work that well. So, I want to see whether a transparency statement about how AI technology was used will change how people think about that piece of advertising. Now, with social media platforms getting rid of third-party fact-checkers and using community notes, I want to know whether transparency statements by third parties or by the advertisers themselves will work better.
Once you have the results, how do you foresee communicators and advertisers using this new information in their work?
I anticipate that advertisers will be more transparent about how they use AI technology in creating advertising. This increased transparency can help people become more aware and thoughtful about the ethical aspects of AI in advertising. Without statements about how AI is used, people may not even consider the possibility that a message was generated by AI, which could affect their trust and understanding. I also expect that when others — whether it’s third parties or the community — label a piece of information as AI-generated, people will be more likely to question its authenticity, even if it isn’t misinformation. This means advertisers who use AI ethically might want to proactively disclose their use of AI to avoid misunderstandings.
As a user, not as a researcher, what are you seeing in your social media feed in terms of AI-generated content? Has being more aware of its capabilities affected how you feel about it?
I feel like I’m paying more attention, or at least I have that awareness because I'm a communication researcher. It’s something I think about, but AI-generated content is very hard to distinguish, even for people who are aware of it. It’s still hard to process. That's why I feel it's very necessary for advertisers to disclose it themselves. I think social media platforms realize that. For example, if you are uploading an AI-generated video to TikTok, they provide labels for you to mark it as AI-generated.
How does this grant and the support from the Page Center help make this project happen?
I was a Ph.D. student at Penn State, so I am familiar with the Page Center. I think the Page Center is doing really great work. First of all, it provides the resources and funding to help launch this study. But the more beneficial thing is that the grant also provides a platform to hear feedback from people in the industry. I really like the Research Roundtables, where the board members give us feedback on our research – especially on the practical implication side. They help me think more about how my research can benefit the industry. I also really like how the Page Center offers networking opportunities that give me the chance to connect with other researchers who are doing similar work.