Leading figures at OpenAI have called for the establishment of an international agency responsible for inspecting and auditing artificial general intelligence (AGI). In their statement, CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever emphasized the need to ensure that AGI is developed safely and for the benefit of humanity.
The executives expressed concerns that AI could surpass human capabilities within the next decade, underscoring the importance of managing the risks this technology poses. They argued that superintelligence, in both its potential benefits and its potential harms, will be more powerful than any previous technology humanity has had to contend with.
To address this, Altman, Brockman, and Sutskever proposed a regulatory framework modeled on the International Atomic Energy Agency (IAEA) for supervising AGI development. They advocated for an international authority that would inspect systems, mandate audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, and track critical resources such as computing power and energy consumption.
The OpenAI executives suggested that an agreement could be reached to limit the rate of AI capability growth at the frontier, and they stressed that inspections would be voluntary, with companies willingly submitting to scrutiny. The proposed agency's primary focus would be on mitigating existential risks rather than on regulatory matters better governed by individual national laws.
The sources for this piece include an article in The Register.