The development of artificial intelligence (AI) has been revolutionary, with the recent introduction of ChatGPT being a prime example. Despite its potential, however, the rapid advancement of AI raises concerns that need to be addressed. Anthropic, a leading AI research lab, is particularly worried about the destructive power of AI even as it builds products that compete with ChatGPT, and it is not alone. Concerns about job loss, data privacy, and the spread of misinformation have drawn attention from many quarters, especially government bodies.
The United States Congress has taken proactive steps in recent years, introducing several bills that address transparency and risk in AI technology. In October, the Biden-Harris administration unveiled an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order provides comprehensive guidelines across domains including cybersecurity, privacy, algorithmic discrimination, civil rights, education, workers’ rights, and research. The administration also recently joined its G7 partners in introducing an international AI code of conduct.
The European Union (EU) is also making significant progress with its proposed AI legislation, the EU AI Act. The act focuses on high-risk AI: tools that may infringe upon individuals’ rights, and systems integrated into high-risk products, such as AI used in aviation. It outlines specific controls for high-risk AI covering robustness, privacy, safety, and transparency, and an AI system that poses an unacceptable risk can be banned from the market.
While there are ongoing debates about the extent of government regulation in AI and other technologies, it is crucial to recognize that smart regulation benefits businesses as well. Striking a balance between innovation and governance can shield businesses from unnecessary risks and give them a competitive advantage. Businesses have a responsibility to minimize the potential negative impacts of the products and services they offer, and that responsibility is especially pointed for generative AI, whose reliance on large amounts of data puts information privacy, and with it consumer trust, on the line.
“Generative AI requires large amounts of data, raising questions about information privacy. Without proper governance, consumer loyalty and sales will falter as customers worry a business’s use of AI could compromise the sensitive information they provide.”
Governance also plays a critical role in addressing potential liabilities from generative AI. If generated materials resemble existing works, businesses may face copyright infringement claims; they may also face demands for compensation from data owners after outputs have already been sold. Mitigating these risks requires rigorous processes and stakeholders who review the parameters and data involved.
It is also essential to acknowledge that, without proper governance, AI outputs can reinforce societal stereotypes and biases. Decision-making systems, resource allocation, and content distribution can all be skewed by these biases. Ensuring fairness and protecting people’s rights and best interests requires appropriate governance: building a diverse workforce, involving the stakeholders who may be affected, and handling data carefully.
“Moving forward, this is a crucial point for governance to adequately protect the rights and best interests of people while also accelerating the use of a transformative technology.”
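To make one such governance check concrete, the sketch below audits a model’s decisions for demographic parity, one common fairness measure. The function, the sample data, and the 0.1 tolerance are illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the spread in approval rates across groups, plus the rates.

    decisions: 0/1 model outputs (e.g., application approved or not)
    groups:    group labels aligned with decisions
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        approvals[group] += decision
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data; the 0.1 tolerance is an assumed business threshold.
gap, rates = demographic_parity_gap(
    decisions=[1, 1, 1, 0, 1, 0, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
if gap > 0.1:
    print(f"Review required: approval rates differ by {gap:.2f} ({rates})")
```

A real audit would draw on production decisions and the protected attributes relevant to the business, and would treat the threshold as a policy choice rather than a technical constant.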
Furthermore, to manage AI-related risks effectively, businesses need to establish a solid framework and adhere to regulations. Experts broadly agree on the threats posed by unchecked AI: job loss, privacy erosion, social inequality, bias, and intellectual property disputes. Each business should assess the risks unique to its operations and develop guidelines that address them proactively. Wipro, for instance, has introduced a four-pillar framework spanning individual, social, technical, and environmental aspects to promote responsible AI implementation.
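As a rough illustration of how such a framework might be operationalized, the sketch below organizes a risk register around Wipro’s four pillars. The pillar names come from the framework described above; the register structure, fields, and example entries are hypothetical.

```python
from dataclasses import dataclass, field

# The four pillars are from Wipro's framework as described above;
# the register structure and example risks are illustrative assumptions.
PILLARS = ("individual", "social", "technical", "environmental")

@dataclass
class Risk:
    pillar: str          # one of PILLARS
    description: str     # what could go wrong
    mitigation: str      # the guideline that addresses it ("" if none yet)
    owner: str           # who is accountable

    def __post_init__(self):
        if self.pillar not in PILLARS:
            raise ValueError(f"unknown pillar: {self.pillar}")

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def unaddressed(self):
        """Risks recorded without a mitigation yet."""
        return [r for r in self.risks if not r.mitigation]

# Hypothetical entries a business might record before deploying a model.
register = RiskRegister([
    Risk("individual", "Chatbot exposes personal data",
         "PII redaction layer", "privacy team"),
    Risk("social", "Outputs reinforce stereotypes",
         "", "model review board"),
])
for risk in register.unaddressed():
    print(f"Unmitigated {risk.pillar} risk: {risk.description}")
```

Keeping mitigations and owners next to each risk makes the proactive-guidelines step auditable: deployment can be gated on an empty unaddressed list.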
Enterprises that rely heavily on AI must prioritize governance, since it ensures accountability and transparency throughout the AI lifecycle. A well-governed system documents how AI models are trained, reducing reliability problems, bias, opaque variable relationships, and loss of control, and it facilitates the monitoring, management, and direction of AI activities. It is crucial to recognize that every AI artifact is a sociotechnical system comprising data, parameters, and people, which is why meeting technological regulatory requirements is not enough; societal aspects must be addressed as well. Collaboration among businesses, academia, government, and society is vital to prevent the problems that arise when AI is developed by homogeneous groups.
“Otherwise, we’ll begin to see a proliferation of AI developed by very homogenous groups that could lead to unimaginable issues.”
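On the documentation point above, one lightweight way to record the AI lifecycle is a training record kept alongside each model, in the spirit of model cards. The fields below are an assumed minimal set covering data, parameters, and people, not a mandated schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingRecord:
    """Minimal documentation kept with each trained model.

    Captures the data, parameters, and people behind an AI artifact,
    reflecting its nature as a sociotechnical system.
    """
    model_name: str
    dataset_sources: list[str]     # where the training data came from
    hyperparameters: dict          # settings used for this training run
    known_limitations: list[str]   # biases or failure modes found in review
    reviewers: list[str]           # stakeholders who signed off

# Hypothetical record for an assumed internal model.
record = TrainingRecord(
    model_name="support-assistant-v2",
    dataset_sources=["internal support tickets (2021-2023)"],
    hyperparameters={"learning_rate": 3e-4, "epochs": 5},
    known_limitations=["underrepresents non-English tickets"],
    reviewers=["data governance board", "legal"],
)

# Persist the record so audits can trace how the model was produced.
with open(f"{record.model_name}_card.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```

Persisting such records makes the lifecycle traceable: auditors and stakeholders can see which data, parameters, and people stood behind each model version.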