Study finds companies lacking in AI transparency

By: Jonathan A. Obar, Giuseppina (Pina) D'Agostino, and Motunrayo Akinyemi, York University

Title Card: Evaluating AI Transparency -- Globe painted on a brick wall.

In 2022 the White House released The Blueprint for an AI Bill of Rights, organized to “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” One of five principles directs companies to provide accessible, plain-language descriptions of AI systems on apps and websites. Indeed, AI transparency is vital as people try to understand the extent to which automated systems may contribute to help or harm.

With funding from the Page Center, we assessed transparency materials from the websites of 40 companies to evaluate the extent of their alignment with the Blueprint. Most assessments involved reviewing company privacy policies. The sample includes 10 social media companies, 10 brick-and-mortar companies, 10 e-commerce companies and 10 banks.

Findings overall suggest that many companies demonstrate elements of privacy transparency (i.e., details about data retention, storage and processing), but minimal AI transparency. This lack of detailed and easy-to-find information suggests many companies are not aligned with the White House call.

The following are specific recommendations based on our study findings:

Share company AI stories

The Blueprint suggests companies should publicize descriptions of automated systems, explaining how they work, and the role automation plays in company actions and decision-making. Companies should convey how they intend to benefit from AI because consumers “should know how and why an outcome impacting you was determined by an automated system."

Our findings show that while some companies post sparse details to varied website sections, they aren’t sharing much about plans for enhancing decision-making with automation, benefitting from generative AI, or otherwise optimizing services. Far more should be done to prioritize helpful notices and explanations about AI-driven decision-making. This includes posting notice materials and prompts online where individuals are most likely to engage with them. Ensuring privacy policies contain helpful information is a place to start, but more innovative notice strategies are also encouraged.

Explain AI risks

The Blueprint notes that companies should proactively assess the extent to which AI systems are discriminatory and organize public disclosures about these assessments. Specifically, companies should convey “testing results and mitigation information ... [these should be] made public whenever possible to confirm ... protections.”

Furthermore, the notice process should be “built into the system design” to ensure that AI system details and any associated implications/risks are described before consumers provide consent as opposed to after automated decision-making occurs.

Provide AI details in accessible and plain language

The Blueprint notes companies should ensure AI transparency materials are “public and easy to find,” provided via “accessible plain language documentation including clear descriptions”. This is to ensure that all individuals, acknowledging varying levels of ability and awareness, can access, engage, and understand notice materials before deciding whether to consent. Children, the elderly, the disabled, non-English speakers, and members of other vulnerable and marginalized communities should be prioritized when organizing notice materials.

Overall, transparency is vital for ensuring consent-based models hold accountable those claiming to advance the responsible development and use of AI. While the Blueprint for an AI Bill of Rights provides a helpful starting point, companies should invest in transparency innovations to address the information asymmetries that contribute to AI inequities and potential harms.

For more information about this study, email Obar at jaobar@yorku.ca. This project was supported by a 2023 Page/Johnson Legacy Scholar Grant from the Arthur W. Page Center and by York University.
