Report Reveals Significant Flaws in AI Governance Tools

According to a recent report by the World Privacy Forum, a review of 18 AI governance tools used by governments and multilateral organizations found that more than a third include “faulty fixes.” These tools, designed to evaluate and measure the fairness and explainability of AI systems, often fall short of that purpose. The report highlights the absence of the quality-assurance mechanisms typically found in software, as well as the use of measurement methods that are unsuitable when applied outside their original context.

It is worth noting that some of these flawed tools and techniques have been developed or endorsed by prominent tech companies such as Microsoft, IBM, and Google, all of which are responsible for creating the very AI systems being evaluated. For instance, IBM’s AI Fairness 360 tool has been lauded by the US Government Accountability Office for its guidance on integrating ethical principles like fairness, accountability, transparency, and safety into AI use. However, the report reveals that the research underpinning the tool’s “Disparate Impact Remover algorithm” has faced significant criticism in scholarly literature.

“Most of the AI governance tools in use today are struggling to meet the mark,” said Pam Dixon, the founder and executive director of the World Privacy Forum. “One major issue is the absence of established requirements for quality assurance or assessment.”

Dixon further explains that certain AI governance tools lack essential documentation, providing no context or conflict-of-interest notices. As a result, these tools may be applied to purposes they were not designed for, with potentially harmful outcomes. The report defines AI governance tools as resources used to evaluate and assess AI systems’ inclusiveness, fairness, explainability, privacy, safety, and other trustworthiness aspects. These tools encompass practical guidance, self-assessment questionnaires, process frameworks, technical frameworks, technical code, and software.

While the utilization of AI governance tools may provide reassurance to regulators and the public, they also have the potential to create a false sense of confidence and unintentionally cause problems that undermine the promise of AI systems. With the recent introduction of the EU AI Act and the release of President Biden’s AI Executive Order, the report emphasizes the importance of examining how governments and organizations are adopting governance toolsets.

“This is an opportunity to assess and improve the ecosystem of AI governance,” said Kate Kaye, deputy director of the World Privacy Forum. “These tools play a vital role in implementing AI policies and will be crucial in executing future AI laws and regulations, such as the EU AI Act.”

Kaye offers an example of how well-intentioned AI governance efforts can go awry through the use of inappropriate tools and techniques. The four-fifths rule, used in US employment law to assess adverse impact on particular groups, has been abstracted from that context and misapplied in some AI governance tools, surfacing in private-sector settings in countries such as Singapore and India that have nothing to do with its original purpose of evaluating employment selection.
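To make the rule concrete, here is a minimal sketch of the four-fifths (80%) calculation as it is conventionally stated in US employment contexts: a protected group’s selection rate should be at least four-fifths of the most-favored group’s rate. All group labels and counts below are hypothetical, purely for illustration; this is not code from any governance tool discussed in the report.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Protected group's selection rate divided by the reference group's rate."""
    return protected_rate / reference_rate

# Hypothetical numbers: 30 of 100 protected-group applicants selected,
# versus 50 of 100 in the reference (most-favored) group.
protected = selection_rate(30, 100)   # 0.30
reference = selection_rate(50, 100)   # 0.50
ratio = adverse_impact_ratio(protected, reference)  # 0.60

# Under the conventional rule of thumb, a ratio below 0.8 flags
# potential adverse impact.
flagged = ratio < 0.8
print(f"ratio={ratio:.2f}, adverse impact flagged: {flagged}")
```

The point the report makes is not that this arithmetic is wrong, but that a threshold calibrated for US employment-selection law carries no validated meaning when transplanted into unrelated AI evaluation contexts.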

While governments and organizations may feel pressure to establish legislation and regulations and to adopt AI governance tools, it is crucial to avoid embedding problematic methods into policies that introduce further issues. Dixon and Kaye express optimism that AI governance tools will improve in 2024. The World Privacy Forum reports that the OECD (Organisation for Economic Co-operation and Development) and the National Institute of Standards and Technology (NIST) have signaled willingness to collaborate on enhancing these tools.

“The OECD is at the forefront of AI governance tools,” Dixon stated. “They have shown a commitment to working with us to drive improvements, which is incredibly encouraging.”

In addition, NIST is eager to contribute, aiming to establish a rigorous evaluation environment based on evidence and standardized testing procedures. Dixon believes that with concentrated effort, meaningful advancements can be achieved in the AI governance tools landscape within a relatively short timeframe of six months.
