International Code of Conduct for Organizations Developing Advanced AI Systems

Government leaders around the world are continuing to prioritize and address the risks and opportunities associated with artificial intelligence (AI). In a recent move, the Group of Seven (G7) industrialized countries announced the International Code of Conduct for Organizations Developing Advanced AI Systems. This voluntary guidance aims to promote safe, secure, and trustworthy AI, building on the “Hiroshima AI Process” launched in May. Government action on AI is gaining momentum: U.S. President Joe Biden issued an Executive Order on “Safe, Secure and Trustworthy Artificial Intelligence” on the same day, the European Union (EU) is finalizing the legally binding EU AI Act, and the UN Secretary-General recently established a High-level Advisory Body on Artificial Intelligence, composed of government, technology, and academic leaders, to support international governance efforts.

The Vision for Advanced AI Systems

The G7 emphasizes the innovative opportunities and transformative potential of advanced AI systems, particularly foundation models and generative AI. However, they also recognize the need to manage risks and protect individuals, society, and shared principles, such as the rule of law and democratic values. To tackle these challenges, inclusive governance for AI is essential.

Guiding Principles for Responsible AI Development and Deployment

The G7, whose members are the U.S., Britain, Canada, France, Germany, Italy, and Japan, joined by the European Union, has released an 11-point framework to guide developers in the responsible creation and deployment of AI. While acknowledging that different jurisdictions may take their own approaches, the code calls on organizations to commit to its principles.

  • Take appropriate measures throughout development to identify, evaluate, and mitigate risks: This includes red-teaming, testing, and mitigation efforts to ensure trustworthiness, safety, and security. Developers should also enable traceability for datasets, processes, and decisions.
  • Identify and mitigate vulnerabilities, incidents, and patterns of misuse after deployment: This involves monitoring for vulnerabilities, incidents, and emerging risks, and making it easier for third parties and users to discover issues and report incidents.
  • Publicly report capabilities, limitations, and appropriate/inappropriate use of advanced AI systems: Transparency reporting, supported by robust documentation processes, is crucial to building trust.
  • Work towards responsible information-sharing and reporting of incidents: This includes evaluation reports, information on security and safety risks, and attempts to circumvent safeguards.
  • Develop, implement, and disclose AI governance and risk management policies: These policies should cover how personal data, prompts, and outputs are handled.
  • Invest in and implement security controls: This involves physical security, cybersecurity, insider threat safeguards, and securing model weights, algorithms, servers, and datasets.
  • Develop and deploy reliable content authentication and provenance mechanisms: Organizations should implement watermarking and provenance data that identify the service or model that created the content, and users should be informed when they are interacting with an AI system.
  • Prioritize research to mitigate societal, safety, and security risks: Conducting research, collaborating, and investing in mitigation tools are crucial to addressing potential risks.
  • Prioritize AI systems to address global challenges: Organizations should focus on developing AI systems to tackle pressing issues like the climate crisis, global health, and education. Supporting digital literacy initiatives is also important.
  • Advance the development and adoption of international technical standards: Contributing to international technical standards and best practices is essential for responsible AI.
  • Implement appropriate data input measures and protections: Personal data and intellectual property should be safeguarded, including through appropriate transparency about training datasets.

The G7 emphasizes that AI organizations must adhere to the rule of law, human rights, due process, diversity, fairness, and non-discrimination, and prioritize “human-centricity.” Advanced systems should not be introduced in ways that harm people, undermine democratic values, facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights. The group is committed to introducing monitoring tools and mechanisms to hold organizations accountable, and the code of conduct will be updated continuously based on input from government, academia, and the private sector.

“Trustworthy, ethical, safe and secure, this is the generative artificial intelligence we want and need” – Věra Jourová, European Commission’s Vice President for Values and Transparency

“The potential benefits of artificial intelligence for citizens and the economy are huge. However, the acceleration in the capacity of AI also brings new challenges. I call on AI developers to sign and implement this Code of Conduct as soon as possible” – Ursula von der Leyen, European Commission President

The G7 leaders assert that their efforts aim to maximize the benefits of AI while mitigating its risks for the common good worldwide, including by ensuring digital inclusion and closing the digital divide in developing and emerging economies. The code of conduct has been endorsed by government officials around the world, underscoring the importance of trustworthy, ethical, and responsible AI development.
