Combating AI-generated crisis misinformation – Scholar Q&A with Liang (Lindsay) Ma

By Jonathan F. McVerry

Title Card: Q&A with Page Center scholar Liang (Lindsay) Ma. Scholars will examine AI's threat to effective disaster response.

Fueled by convincing AI-generated deepfakes, misinformation has become another urgent challenge for crisis responders. During a catastrophe, AI-powered misinformation can spread faster than aid. First-time Page Center scholars Jie Zhuang, Texas Christian University, and Liang (Lindsay) Ma, University of Massachusetts-Boston, are studying how people perceive and combat this new wave of misinformation. Their study explores how different sources — official agencies, social media users, and news organizations — help people identify AI-created misinformation and thereby reduce its negative impact on victims of natural disasters.

The project is part of the Page Center’s 2025 research call on the ethics of generative AI. In this Q&A, Ma talks about the ethical responsibilities of AI developers, news agencies, and users. She also discusses a plan to understand how people process AI-generated misinformation and develop strategies to limit its spread during critical situations.

Where did this research idea come from? How did you both get started?

I recently accepted a position at the University of Massachusetts-Boston. I was at Texas Christian University before that, where Dr. Zhuang and I were colleagues. We are also very close friends. Dr. Zhuang’s research area is persuasion and health/risk communication. My research areas are crisis communication, risk communication and consumer behavior. When the Page Center put out the call about AI and public communication, we thought it was an excellent research area where our research interests merge and we could collaborate.

How do your research backgrounds intersect with this project on AI?

The impact of AI is new in both of our research areas. It's hard to believe that AI entered our lives only a little more than two years ago. We can all see the impact it has on our lives, right? So, when we think about AI and its impact on public and crisis communication, we feel that AI can be very useful, but also harmful if it is misused or abused. Public communication after large-scale disasters is an area heavily affected by AI-generated misinformation. For example, disaster victims may not seek or receive timely relief if they believe AI-powered misinformation. This is also an area that both of us deeply care about. In this project, we want to understand whether simple cues, such as information source and labeling, can help people who are vulnerable after disasters combat misinformation.

Ideally, we should be able to use AI as a powerful tool to fight misinformation, but unfortunately, AI is not used to its best ability. So, before we understand how we can use AI to fight misinformation, which is a topic that may be too early for us, we need to study how to fight AI-generated misinformation.

Can you talk more about the process of AI training and why it’s too early to use AI to fight misinformation?

It's about the ethics of public communication. The competition in AI development is fierce. The big tech companies are trying to develop their AI so fast, and I think one question that gets neglected is, “How can we make sure that the information we feed into AI training is actually 100% accurate?” If the data you feed to the AI is not accurate, how can you make sure it can verify information? Can I just type a claim into ChatGPT or another AI model and ask, “Hey, is this accurate?” Ideally, the model should be able to tell me, “Well, this is not accurate, or this is accurate … And here are the accurate sources that you can look into.”

Can you share the plan for your project?

We are going to conduct a two-by-three experimental study to understand how people combat AI-generated misinformation. The first experimental factor examines whether telling people that content is AI-generated affects their information processing. The second factor explores which sources are most effective at correcting misinformation: official agencies like FEMA, social media users, or news agencies. We'll measure three key variables from the situational theory of problem solving (STOPS): problem recognition – how much people realize AI-generated misinformation is an issue; constraint recognition – how much people perceive obstacles that limit their ability to stop misinformation's spread; and involvement recognition – how much people think misinformation affects them. By understanding how different sources, AI-generation labels, and these three STOPS variables shape information perception and processing, we hope to develop strategies for fighting AI-generated false content and helping people more effectively stop its spread.
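For readers unfamiliar with factorial designs, the two-by-three setup described above can be sketched as a simple crossing of the two factors. This is only an illustration of the design's structure, not the authors' actual study materials; the factor-level wording here is assumed for the example.

```python
from itertools import product

# Illustrative sketch of the 2x3 factorial design described in the interview:
# one two-level factor (AI-generation label present or absent) crossed with
# one three-level factor (source of the correction), yielding six conditions.
label_factor = ["labeled as AI-generated", "no label"]
source_factor = ["official agency (e.g., FEMA)", "social media user", "news agency"]

# Every combination of the two factors is one experimental condition.
conditions = list(product(label_factor, source_factor))

for label, source in conditions:
    print(f"{label} | correction from {source}")
```

Each participant would see one of these six conditions, which is what lets the researchers separate the effect of the AI-generation label from the effect of the correcting source.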

Do you feel less susceptible to AI-made misinformation due to your research interests and expertise?

It has made me more cautious when I am exposed to information that I'm not familiar with. When I see something on social media, if it's important for me, then my first reaction is, well, I need to check the sources and make sure that this is actually accurate.

What practical uses do you expect to generate from this project?

We are hoping that the results will have strong ethical implications for the general public as well as AI developers and engineers. We expect the results of the study to help them understand that fighting misinformation is not just the job of news agencies. It's not just the job of official agencies like FEMA. Rather, it is a collaborative effort. For example, AI developers and engineers also need to think about the ethical aspects of their work. Everybody needs to play a role. I think we all have a responsibility to make sure the information we share is accurate before we pass it along. This applies not just to public communication but also to corporate communication. We also hope the results can offer guidance to organizations, including companies troubled by misinformation, on how to stop its spread among their publics.

What is the Page Center’s role in this project and helping your research be successful?

I really appreciate the research topic this year, because this is a much-needed research area. Researchers have only just started to look into AI and its role in crisis and public communication, and I think the Page Center’s call for proposals is a major force motivating scholars to study this area in greater depth. Everybody, including researchers, is still trying to understand the impact of AI, and the topic call from the Page Center this year is very timely. I also appreciate that the projects the Page Center sponsored this year are very diverse, yet all share a unified theme of AI in public communication.