[ Article originally appeared in https://greenlining.org ]
By Christine Phan
Building Transparency in the Age of Algorithms
What does AI transparency mean, and why do we need it? In this age of technology, artificial intelligence algorithms power the software behind life-changing decisions in employment, housing, and more. The tech world has a long history of leveraging software to maximize efficiency. Without oversight, these decision-making algorithms have the potential to replicate existing discriminatory practices and create new ones, deepening inequality, particularly for people of color and low-income people. This phenomenon is called algorithmic bias: it occurs when an algorithmic decision creates unfair outcomes that unjustifiably and arbitrarily privilege certain groups over others. Given the stakes of decisions like whether to approve a loan or offer a job, it would seem like a no-brainer to ensure that those impacted understand how these decisions are made.
We need insight into, and oversight of, how these systems work, but AI transparency remains complicated. The few companies that do offer insight into AI-based decisions provide explanations that are neither meaningful nor helpful. For instance, Facebook (Meta)’s “Why am I seeing this?” tag on social media posts doesn’t actually provide transparency into its algorithm or into how consumer data determines which paid ads are promoted over others. Without meaningful, specific explanations, Facebook denies its consumers autonomy over how their data is used.
The practice of building out documentation and risk assessments can help bridge the knowledge gap between algorithms and their outcomes. Documentation and risk assessments are tools that outline how and why algorithms are designed, function, and perform, and provide an opportunity to investigate ethical and legal considerations around the potential harms they cause. In the past several years, researchers and policymakers alike have used documentation and risk assessments to increase AI accountability — from Google’s Model Cards to the European Union’s General Data Protection Regulation (GDPR).
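To make the idea of documentation concrete, here is a minimal sketch of a model-card-style record, loosely inspired by Google's Model Cards proposal mentioned above. The field names, model name, and metric values are illustrative assumptions, not an official schema.

```python
# A minimal sketch of model-card-style documentation: a structured record
# of what a model is for, how it was evaluated, and its known limitations.
# All field names and values below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    evaluation_factors: list = field(default_factory=list)  # subgroups tested
    metrics: dict = field(default_factory=dict)
    ethical_considerations: str = ""

card = ModelCard(
    model_name="loan-screening-v1 (hypothetical)",
    intended_use="First-pass review of loan applications; final decisions "
                 "remain with a human underwriter.",
    out_of_scope_uses=["employment screening", "housing decisions"],
    evaluation_factors=["race", "ethnicity", "gender"],
    metrics={"overall_accuracy": 0.91, "largest_subgroup_gap": 0.07},
    ethical_considerations="Approval-rate gaps across subgroups must be "
                           "reviewed before deployment.",
)
print(card.model_name)
print(card.metrics)
```

Even a lightweight record like this forces a developer to answer the questions regulators and impacted people care about: what the system is for, what it must not be used for, and which subgroups were tested.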
Documentation can take many different forms — but with limited space and infinite questions, it’s important to establish transparency guidelines that capture what’s most useful.
Documentation standards should be designed with three main stakeholders in mind: those impacted by algorithmic decisions, regulators, and developers. To serve the unique goals of each stakeholder, key priorities for each group include:
- For impacted people and community members: information on the presence of bias
- For regulators: the results of bias testing and businesses’ explanations justifying any potential harm
- For industry and AI developers: clear examples of documentation & risk assessments to emulate
Impacted People & Community: Agency and Information
AI systems are built from people’s data to make decisions on their behalf. It only makes sense that transparency should center the information most useful to impacted people.
People need pathways to understand how AI systems relate to them, and whether these systems lead to biased outcomes that may affect them. To that end, any impact assessment should provide insight into how the AI system performs on disaggregated subgroups of race, ethnicity, gender, and other protected classes. This information on bias must be public and easy to access, rather than limited to a select group of people or regulators. Once this information is made accessible, future policies can empower people to decide how they participate and engage with these AI systems, through options such as opting out of AI-made decisions, civil action, and more.
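The disaggregated reporting described above can be sketched in a few lines: compute a decision rate per subgroup and the gap between the best- and worst-treated groups. The data, group labels, and function name below are hypothetical assumptions for illustration.

```python
# A minimal sketch of disaggregated bias reporting: given a model's
# binary decisions and each applicant's subgroup label, report the
# approval rate per subgroup and the largest gap between groups.
# All data and group names are hypothetical.
from collections import defaultdict

def disaggregated_rates(decisions, groups):
    """Return {group: approved / total} for each subgroup."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        approved[group] += int(decision)
    return {g: approved[g] / total[g] for g in total}

# Hypothetical loan decisions (1 = approved) with subgroup labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "B", "B", "B", "B", "B"]

rates = disaggregated_rates(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # approval rate per subgroup
print(f"largest gap: {gap:.2f}")
```

An aggregate approval rate would hide the disparity this surfaces; publishing per-subgroup numbers like these is what makes an impact assessment meaningful to the people affected.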