OpenAI’s nonprofit board of directors is responsible for determining when the company has achieved Artificial General Intelligence (AGI), which OpenAI defines as “a highly autonomous system that outperforms humans at most economically valuable work” (OpenAI). It is important to note that this definition of AGI is not universally agreed upon. The board consists of six members, and their determination will have significant implications for OpenAI and its biggest investor, Microsoft (Kilpatrick).
The Influence of the Board’s Decision
The decision on whether AGI has been achieved will shape the future course of OpenAI’s relationship with Microsoft. OpenAI has a for-profit arm that is legally obligated to pursue the nonprofit’s mission. Once the board declares that AGI has been attained, any further developments related to it will be excluded from the IP licenses and other commercial terms with Microsoft, which apply only to pre-AGI technology (OpenAI). This decision holds immense importance for Microsoft, which currently partners with OpenAI on AGI development (Smith).
OpenAI’s Nonprofit Structure
OpenAI’s structure comprises a nonprofit parent and a for-profit subsidiary. The for-profit subsidiary, OpenAI Global, LLC, is fully controlled by the nonprofit. Although it is permitted to generate profits, it remains subject to the nonprofit’s mission (OpenAI). This structure is intended to keep the nonprofit board focused on the broader benefits of AGI for humanity (OpenAI spokesperson).
The current members of the OpenAI nonprofit board are Greg Brockman, Ilya Sutskever, and Sam Altman, who hold executive positions within OpenAI, along with three non-employee members: Adam D’Angelo, Tasha McCauley, and Helen Toner. D’Angelo is the CEO of Quora; McCauley is a tech entrepreneur; and Toner is Director of Strategy at Georgetown University’s Center for Security and Emerging Technology. The non-employee members have affiliations with the Effective Altruism movement and bring diverse perspectives on AI technology, policy, and safety (OpenAI spokesperson).
The Legitimacy of OpenAI’s Governance Structure
There has been debate about whether OpenAI’s governance structure is appropriate for making critical determinations about AGI. Suzy Fulton, a legal expert, suggests that the structure allows the nonprofit board to fulfill its fiduciary duty toward humanity, and that the board’s majority of independent members further supports unbiased decision-making (Fulton).
Anthony Casey, a law professor, acknowledges that making a determination like AGI at the board level is unusual. From a legal standpoint, however, there is no impediment to designating certain issues, such as AGI, as mission-critical and subject to board oversight (Casey). Despite differing opinions on whether AGI is imminent or even possible, OpenAI’s structure and the board’s decision-making authority remain intact.
Criticism and the Importance of Diversity
Merve Hickok of the Center for AI and Digital Policy raises concerns that OpenAI’s focus on AGI overlooks the current impact of its AI models and tools. She believes discussions should center on OpenAI’s underlying mission rather than on debates about board size or diversity, arguing that scrutinizing the legitimacy of OpenAI’s claims regarding AGI is the more crucial issue (Hickok).
The Future of OpenAI and Microsoft
The implications of OpenAI’s vague definition of AGI for Microsoft remain uncertain, especially without complete details of the operating agreement between the two entities. Anthony Casey suggests that conflicts could arise between Microsoft’s for-profit interests and OpenAI’s nonprofit mission: the cap on profits is straightforward to implement, but reconciling conflicting interests may prove challenging (Casey). For now, OpenAI and Microsoft continue their partnership and express mutual optimism about their collaboration on AGI development (Altman).