June 27, 2025
Nudging biases out of AI-created content – Scholar Q&A with Haoran Chu

Stereotypes and biases can run rampant across AI-generated content, but it’s not clear how they affect users’ perception of artificial intelligence and the output it produces. A Page Center study is asking whether people attribute AI authorship differently based on cultural stereotypes. Leading the project are scholars Haoran Chu, Rita Linjuan Men and Yuan Sun, University of Florida; Sixiao Liu, University of Central Florida; and Shupei Yuan, Northern Illinois University. In this study, they are testing whether a technique called "nudging" can help reduce bias.
The project is part of the Page Center’s 2025 research call on the ethics of generative AI. Chu, Men and Yuan have been funded by the Page Center multiple times; Sun and Liu are first-time Center scholars. In this Q&A, Chu talks about how the research team came together, how nudging works and how the project could potentially enhance communication practices.
First off, how did your team come together for this project?
We have a big team, and we have worked together on multiple projects. We have scholars from all kinds of backgrounds – health-related research, public relations work, prosocial communication, environmental and science communication. All of us have a shared interest in artificial intelligence and how that can be used to promote the general well-being of our society.
Can you talk about the concept of ‘nudging’? What does that mean?
An earlier form of nudging is the choice between opt-in and opt-out defaults. Organ donation is a classic example: when registration is set up as an opt-out process, the majority of people stay with the default and remain donors. That is one form of nudging. You don't force people to make a choice, but a simple design change alters their behavior. In the context of AI authorship attribution, our nudge is a short note placed at the beginning of the message telling people that, as research shows, people from different social and cultural groups use artificial intelligence. Then we let them make the authorship attribution. It is a very quick reminder that everyone can use AI, not just one particular group.
What are some practical outcomes you hope emerge from this research?
I believe our findings could inform fairer communication evaluation practices. A lot of students are using AI, for example. Should we penalize them for doing that? And if people do penalize the use of AI, will that lead to unfair practices? Our study will speak directly to that. We believe nudging offers a simple, very scalable tool for reducing bias, and I believe this study could also help protect marginalized communicators from being penalized unfairly compared to others.
How is this project put together? What is your plan?
We are designing the first experiment this summer, and we plan to launch it in August or September. Based on the results of the first experiment, we will design our second experiment, testing the effect of nudging and whether it can serve as a useful intervention. We plan to run that in the fall. If the findings are good, as we expect, and I'm fairly confident they will be based on the pilot testing, we will share the work through conferences, publications and other outlets.
How or where do you see this project fitting into the fast-changing AI research literature?
Yes, a lot of AI-focused studies have been published in the last three years. The volume is even greater than that of COVID-related papers during the pandemic. We really want to make a difference, and I think this is very valuable research. Some of us have done work using AI to generate content and seeing whether that qualitatively and quantitatively changes people's responses to it. But our team is interested in finding out what the implications of AI are for our society. This study is not really about how AI is being used; it's about how people see AI and how people see other people using AI. It takes a more traditional social science view. We want to find out what kinds of changes AI will bring to human behavior and to how people perceive one another.
How does the Page Center help make this project happen and help your team reach its research goals?
We are very grateful to the Page Center. The last project Dr. Yuan and I did with the Page Center has already generated four publications, and two more are in progress. So, I would say the Page Center is crucial in bringing this work to life. As with other work by my colleagues and me, the funding, the visibility and the strong alignment of mission are all important. The Page Center’s emphasis on ethical communication made this project possible at a critical time. We are also very excited to share our work through the public-facing channels the Page Center provides. We appreciate all the support.