Anthropic, Google, Microsoft, and OpenAI have joined forces to launch the Frontier Model Forum to ensure the “safe and responsible development” of the most powerful AI models.
Frontier models, as defined by the companies, surpass the capabilities of the most advanced existing models and can perform a wide range of tasks. Over time, the forum plans to establish a formal charter, appoint an advisory board, and collaborate with civil society groups. The companies are also extending an invitation to competitors and civil society organizations to become partners.
Microsoft’s president, Brad Smith, emphasizes that AI creators have a responsibility to ensure safety, security, and human control. OpenAI’s vice president of global affairs, Anna Makanju, emphasizes the urgency of their work. The forum’s global reach will connect it with G-7, OECD, and U.S.-EU Trade and Technology Council processes. Additionally, it will support the Partnership on AI and MLCommons.
The Forum’s main objectives are to advance AI safety research for responsible development of frontier models, minimize risks, and enable standardized evaluations of capabilities and safety. It also aims to identify best practices for the responsible development and deployment of frontier models.
Furthermore, the Forum aims to educate the public about the nature, capabilities, limitations, and impact of AI models. It also collaborates with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks. Lastly, it supports the development of applications that address societal challenges, such as climate change, cancer detection, and cybersecurity.
Sources for this piece include reporting from Axios.