Civil rights organizations want nondiscrimination steps laid out in NIST’s AI guidance
A group of civil rights, tech and other advocacy organizations called on the National Institute of Standards and Technology to spell out, in the final version of its draft "A Proposal for Identifying and Managing Bias in Artificial Intelligence," the steps needed to ensure nondiscriminatory and equitable outcomes for all users of artificial intelligence systems.
The definition of model risk — traditionally thought of as the risk of financial loss when inaccurate AI models are used — should be expanded to include the risk of discriminatory and inequitable outcomes, wrote the group in its Friday response to NIST’s draft proposal.
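To make that expanded definition concrete, here is a minimal, hypothetical sketch of what weighing the two risks side by side could look like: a conventional financial-loss estimate paired with the four-fifths (80%) disparate impact threshold from U.S. EEOC employment guidance. The function names, loss figures and lending scenario are illustrative assumptions, not a method prescribed by NIST or the coalition.

```python
# Illustrative sketch only: pairs a traditional financial model-risk metric
# with a simple disparate impact check. Names, figures and thresholds are
# assumptions for illustration, not NIST guidance or the coalition's method.

def expected_financial_loss(num_errors, cost_per_error):
    """Traditional model risk: expected dollar loss from inaccurate predictions."""
    return num_errors * cost_per_error

def disparate_impact_ratio(approvals_by_group):
    """Discrimination risk: lowest group approval rate divided by the highest.

    approvals_by_group maps a group label to (approved_count, total_count).
    """
    rates = [approved / total for approved, total in approvals_by_group.values()]
    return min(rates) / max(rates)

if __name__ == "__main__":
    # Hypothetical lending-model outcomes for two demographic groups.
    approvals = {"group_a": (80, 100), "group_b": (55, 100)}

    loss = expected_financial_loss(num_errors=12, cost_per_error=500.0)
    ratio = disparate_impact_ratio(approvals)

    print(f"Expected financial loss: ${loss:,.2f}")
    print(f"Disparate impact ratio: {ratio:.2f}")
    # The four-fifths rule from EEOC employment guidance treats ratios
    # below 0.8 as evidence of potential disparate impact.
    if ratio < 0.8:
        print("Flag: review for discriminatory outcomes alongside financial risk.")
```

Under this framing, a model could clear the financial-loss bar and still fail the equity check, which is the gap the letter wants NIST's framework to close.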
NIST released the proposal for public comment on June 22 with the goal of helping AI designers and deployers mitigate social biases throughout the development lifecycle. But the letter from 34 organizations, most of them working in the housing and consumer credit space but also including the NAACP and the Southern Poverty Law Center, makes 12 recommendations for improving NIST's proposal and process.
“It is critically important for NIST to propose a framework that ensures that AI risk analysis works hand in hand with discrimination risk analysis. Moreover, efforts to identify AI risks must not exclude or undermine efforts to promote fair and equitable outcomes,” said Michael Akinwumi, chief tech equity officer at the National Fair Housing Alliance, in a statement. “Any new AI-related initiatives should be reviewed for potentially illegal discriminatory treatment or effect for communities of color and other underserved communities.”
Other enhancements the group suggests include issuing actionable policy statements that commit to consumer protection and civil rights laws and that outline expectations and best practices, as well as encouraging AI developers to employ a diverse workforce.
The organizations are also calling on NIST to recommend regular civil rights and equity training that helps personnel catch red flags, and to ensure AI providers are transparent about their systems and those systems' impact.
On the process side, the group called for NIST to issue a detailed action plan and engage with civil rights activists, consumer advocates and impacted communities.
NIST’s own staff should include civil rights specialists who can help agencies and organizations assess the potential discriminatory impact of their systems, and staffers working on AI issues should themselves be diverse, according to the letter.
The group further recommended NIST share its methods, data, models, decisions and solutions openly.
The letter concludes that NIST should support public research analyzing AI use cases and their impact on people and communities of color and other protected classes.