July 07, 2025
Ethical implications of virtual influencers in advertising – Scholar Q&A with Nadine Walter

Garnering millions of followers, virtual influencers have become a prominent part of the ever-growing world of artificial intelligence. These digital personas simulate human characteristics and connect with users through popular social media platforms. First-time Page Center scholar Nadine Walter, Pforzheim University, is examining the ethical considerations surrounding virtual influencers' development and how they interact with users. Her work not only explores the technological potential of virtual influencers, but also seeks to answer questions about authenticity, responsibility and human-AI relationships in the digital marketing landscape.
The project is part of the Page Center’s 2025 research call on the ethics of generative AI. In this Q&A, Walter talks about the emergence of virtual influencers, their future in advertising and the ethical questions she hopes to answer through her Page Center research project.
When did you become interested in the world of virtual influencers and branding?
It started when they first appeared, basically. I remember in 2018, Time Magazine named the virtual influencer Lil Miquela one of the 25 most influential people on the internet. This created a big buzz, and then she gained more and more followers. Since part of my research had already focused on influencer marketing, I was interested to understand how this new phenomenon of virtual influencers would develop. My German university and Penn State have had a relationship for more than 20 years. I know Colleen and Lee Ahern [Page Center senior research fellows], and they came over to teach at my university. Last spring, I spent the semester at Penn State. I was introduced to Shyam Sundar, who does research on chatbots and how they communicate with humans. I find it interesting, because virtual influencers are like chatbots, but with influencers, you can build relationships.
And that led you to apply for the Page Center grant?
Yes, I was introduced to Heather Shoenberger [Page Center research fellow] too. She is co-leading the call, and she had done research on virtual influencers before. We connected and started doing work on virtual influencers together. She told me that the Page Center’s next research call was going to be on AI, and I knew that this is a critical topic for virtual influencers.
In the areas you study, how has AI affected advertising?
AI is so advanced in chatbots already. Chatbots can solve functional problems – they can book your flight or tell you how much money you have in your bank account. But with influencers in advertising or communication, AI is not that advanced yet.
Because, when we look at virtual influencers – and this is a big part of the research – they're still human-backed. There is still an agency behind them, but eventually they will be fully AI-driven and autonomous. It might be a few months, it might be one or two years, but it will come. I imagine that Lil Miquela will be able to talk to each follower individually. This will be a big game changer, when AI can fully and autonomously control these virtual influencers, because of the opportunities for scalability.
How do you feel about these advancements happening in a few months or a few years?
[When presenting at the Page Center’s Research Roundtable], one of the advisory board members said she finds this scary – and she’s right. I think that's why this research call at the Page Center is so valuable, because a lot of ethical questions arise. I mean, what happens if a virtual influencer does anything wrong and they're AI-backed? Who's responsible for that? Who will take responsibility – the brand, the influencer, or will they be able to say, “It's just AI”?
The other question that is central to this research: How much do people treat these virtual personas as humans? How involved in the relationship do they get? Will people feel deceived if a virtual influencer does things he or she cannot do, like consume products? Or will they know it’s artificial and it's just a fun game to play?
How do you plan to answer these questions? What is your plan for the project?
We’ve built a model already to test in an empirical quantitative study. We will look at deception, and we will look at the difference between human-backed versus AI-backed influencers. We will also look into third-person narrative and sensory cues. What role do they play? Might they reduce the deception consumers might feel? Currently, we are putting the questionnaire together and working on the stimulus.
How does this project fit into the AI research that’s already out there and growing?
There is a lot out there, of course. A lot of scholars in marketing and advertising are focusing on AI. This field of virtual influencers has grown, but it’s not exploding. When you look at chatbots, there is much more out there, but with virtual influencers it’s still a little underdeveloped. I think the main reason is that fully AI-controlled influencers do not exist yet. Once they do, there will be a push for more research.
What practical outcomes do you foresee coming out of your project?
We mainly look at what the research means for brands and how they might use virtual influencers. In the future, brands will develop their own virtual influencers to promote their products instead of paying licensing fees to existing virtual influencers. Why should they pay fees to Miquela if they can build their own avatar? It is already possible for brands to create their own avatars on TikTok. The trend is happening.